00:00:00.000 Started by upstream project "autotest-per-patch" build number 132560
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.029 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.030 The recommended git tool is: git
00:00:00.030 using credential 00000000-0000-0000-0000-000000000002
00:00:00.033 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.054 Fetching changes from the remote Git repository
00:00:00.056 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.086 Using shallow fetch with depth 1
00:00:00.086 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.086 > git --version # timeout=10
00:00:00.107 > git --version # 'git version 2.39.2'
00:00:00.107 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.128 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.128 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.468 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.484 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.498 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.498 > git config core.sparsecheckout # timeout=10
00:00:05.512 > git read-tree -mu HEAD # timeout=10
00:00:05.534 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.564 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.564 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.678 [Pipeline] Start of Pipeline
00:00:05.693 [Pipeline] library
00:00:05.694 Loading library shm_lib@master
00:00:05.695 Library shm_lib@master is cached. Copying from home.
00:00:05.712 [Pipeline] node
00:00:05.722 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.724 [Pipeline] {
00:00:05.734 [Pipeline] catchError
00:00:05.736 [Pipeline] {
00:00:05.747 [Pipeline] wrap
00:00:05.755 [Pipeline] {
00:00:05.764 [Pipeline] stage
00:00:05.766 [Pipeline] { (Prologue)
00:00:05.979 [Pipeline] sh
00:00:06.260 + logger -p user.info -t JENKINS-CI
00:00:06.278 [Pipeline] echo
00:00:06.279 Node: WFP6
00:00:06.287 [Pipeline] sh
00:00:06.590 [Pipeline] setCustomBuildProperty
00:00:06.599 [Pipeline] echo
00:00:06.601 Cleanup processes
00:00:06.605 [Pipeline] sh
00:00:06.888 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.888 1485680 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.904 [Pipeline] sh
00:00:07.201 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.201 ++ grep -v 'sudo pgrep'
00:00:07.201 ++ awk '{print $1}'
00:00:07.201 + sudo kill -9
00:00:07.201 + true
00:00:07.216 [Pipeline] cleanWs
00:00:07.227 [WS-CLEANUP] Deleting project workspace...
00:00:07.227 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.233 [WS-CLEANUP] done
00:00:07.238 [Pipeline] setCustomBuildProperty
00:00:07.251 [Pipeline] sh
00:00:07.531 + sudo git config --global --replace-all safe.directory '*'
00:00:07.615 [Pipeline] httpRequest
00:00:08.280 [Pipeline] echo
00:00:08.282 Sorcerer 10.211.164.20 is alive
00:00:08.292 [Pipeline] retry
00:00:08.294 [Pipeline] {
00:00:08.308 [Pipeline] httpRequest
00:00:08.312 HttpMethod: GET
00:00:08.312 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.313 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.320 Response Code: HTTP/1.1 200 OK
00:00:08.321 Success: Status code 200 is in the accepted range: 200,404
00:00:08.321 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.188 [Pipeline] }
00:00:20.213 [Pipeline] // retry
00:00:20.221 [Pipeline] sh
00:00:20.513 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.530 [Pipeline] httpRequest
00:00:20.893 [Pipeline] echo
00:00:20.896 Sorcerer 10.211.164.20 is alive
00:00:20.907 [Pipeline] retry
00:00:20.909 [Pipeline] {
00:00:20.926 [Pipeline] httpRequest
00:00:20.931 HttpMethod: GET
00:00:20.932 URL: http://10.211.164.20/packages/spdk_a640d9f989075bd552c09c63a9eac6f5cc769887.tar.gz
00:00:20.932 Sending request to url: http://10.211.164.20/packages/spdk_a640d9f989075bd552c09c63a9eac6f5cc769887.tar.gz
00:00:20.947 Response Code: HTTP/1.1 200 OK
00:00:20.947 Success: Status code 200 is in the accepted range: 200,404
00:00:20.948 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a640d9f989075bd552c09c63a9eac6f5cc769887.tar.gz
00:02:19.300 [Pipeline] }
00:02:19.318 [Pipeline] // retry
00:02:19.325 [Pipeline] sh
00:02:19.612 + tar --no-same-owner -xf spdk_a640d9f989075bd552c09c63a9eac6f5cc769887.tar.gz
00:02:22.163 [Pipeline] sh
00:02:22.448 + git -C spdk log --oneline -n5
00:02:22.448 a640d9f98 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:22.448 ae1917872 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:02:22.448 ff68c6e68 nvmf: Expose DIF type of namespace to host again
00:02:22.448 dd10a9655 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:02:22.448 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:02:22.459 [Pipeline] }
00:02:22.474 [Pipeline] // stage
00:02:22.484 [Pipeline] stage
00:02:22.486 [Pipeline] { (Prepare)
00:02:22.506 [Pipeline] writeFile
00:02:22.525 [Pipeline] sh
00:02:22.814 + logger -p user.info -t JENKINS-CI
00:02:22.827 [Pipeline] sh
00:02:23.111 + logger -p user.info -t JENKINS-CI
00:02:23.123 [Pipeline] sh
00:02:23.410 + cat autorun-spdk.conf
00:02:23.410 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:23.410 SPDK_TEST_NVMF=1
00:02:23.410 SPDK_TEST_NVME_CLI=1
00:02:23.410 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:23.410 SPDK_TEST_NVMF_NICS=e810
00:02:23.410 SPDK_TEST_VFIOUSER=1
00:02:23.410 SPDK_RUN_UBSAN=1
00:02:23.410 NET_TYPE=phy
00:02:23.417 RUN_NIGHTLY=0
00:02:23.422 [Pipeline] readFile
00:02:23.447 [Pipeline] withEnv
00:02:23.449 [Pipeline] {
00:02:23.461 [Pipeline] sh
00:02:23.746 + set -ex
00:02:23.746 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:23.746 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:23.746 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:23.746 ++ SPDK_TEST_NVMF=1
00:02:23.746 ++ SPDK_TEST_NVME_CLI=1
00:02:23.746 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:23.746 ++ SPDK_TEST_NVMF_NICS=e810
00:02:23.746 ++ SPDK_TEST_VFIOUSER=1
00:02:23.746 ++ SPDK_RUN_UBSAN=1
00:02:23.746 ++ NET_TYPE=phy
00:02:23.746 ++ RUN_NIGHTLY=0
00:02:23.746 + case $SPDK_TEST_NVMF_NICS in
00:02:23.746 + DRIVERS=ice
00:02:23.746 + [[ tcp == \r\d\m\a ]]
00:02:23.746 + [[ -n ice ]]
00:02:23.746 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:23.746 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:23.746 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:23.746 rmmod: ERROR: Module irdma is not currently loaded
00:02:23.746 rmmod: ERROR: Module i40iw is not currently loaded
00:02:23.746 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:23.746 + true
00:02:23.746 + for D in $DRIVERS
00:02:23.746 + sudo modprobe ice
00:02:23.746 + exit 0
00:02:23.756 [Pipeline] }
00:02:23.771 [Pipeline] // withEnv
00:02:23.777 [Pipeline] }
00:02:23.791 [Pipeline] // stage
00:02:23.800 [Pipeline] catchError
00:02:23.801 [Pipeline] {
00:02:23.816 [Pipeline] timeout
00:02:23.817 Timeout set to expire in 1 hr 0 min
00:02:23.819 [Pipeline] {
00:02:23.833 [Pipeline] stage
00:02:23.835 [Pipeline] { (Tests)
00:02:23.849 [Pipeline] sh
00:02:24.140 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:24.140 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:24.140 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:24.140 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:24.140 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:24.140 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:24.140 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:24.140 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:24.140 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:24.140 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:24.140 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:24.140 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:24.140 + source /etc/os-release
00:02:24.140 ++ NAME='Fedora Linux'
00:02:24.140 ++ VERSION='39 (Cloud Edition)'
00:02:24.140 ++ ID=fedora
00:02:24.140 ++ VERSION_ID=39
00:02:24.140 ++ VERSION_CODENAME=
00:02:24.140 ++ PLATFORM_ID=platform:f39
00:02:24.140 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:24.140 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:24.140 ++ LOGO=fedora-logo-icon
00:02:24.140 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:24.140 ++ HOME_URL=https://fedoraproject.org/
00:02:24.140 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:24.140 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:24.140 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:24.140 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:24.140 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:24.140 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:24.140 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:24.140 ++ SUPPORT_END=2024-11-12
00:02:24.140 ++ VARIANT='Cloud Edition'
00:02:24.140 ++ VARIANT_ID=cloud
00:02:24.140 + uname -a
00:02:24.140 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:24.140 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:26.703 Hugepages
00:02:26.703 node hugesize free / total
00:02:26.703 node0 1048576kB 0 / 0
00:02:26.703 node0 2048kB 0 / 0
00:02:26.703 node1 1048576kB 0 / 0
00:02:26.703 node1 2048kB 0 / 0
00:02:26.704
00:02:26.704 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:26.704 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:26.704 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:26.704 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:26.704 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:26.704 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:26.704 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:26.704 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:26.704 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:26.704 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:26.704 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:26.704 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:26.704 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:26.704 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:26.704 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:26.704 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:26.704 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:26.704 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:26.704 + rm -f /tmp/spdk-ld-path
00:02:26.704 + source autorun-spdk.conf
00:02:26.704 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:26.704 ++ SPDK_TEST_NVMF=1
00:02:26.704 ++ SPDK_TEST_NVME_CLI=1
00:02:26.704 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:26.704 ++ SPDK_TEST_NVMF_NICS=e810
00:02:26.704 ++ SPDK_TEST_VFIOUSER=1
00:02:26.704 ++ SPDK_RUN_UBSAN=1
00:02:26.704 ++ NET_TYPE=phy
00:02:26.704 ++ RUN_NIGHTLY=0
00:02:26.704 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:26.704 + [[ -n '' ]]
00:02:26.704 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:26.704 + for M in /var/spdk/build-*-manifest.txt
00:02:26.704 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:26.704 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:26.704 + for M in /var/spdk/build-*-manifest.txt
00:02:26.704 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:26.704 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:26.704 + for M in /var/spdk/build-*-manifest.txt
00:02:26.704 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:26.704 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:26.704 ++ uname
00:02:26.704 + [[ Linux == \L\i\n\u\x ]]
00:02:26.704 + sudo dmesg -T
00:02:26.964 + sudo dmesg --clear
00:02:26.964 + dmesg_pid=1487114
00:02:26.964 + [[ Fedora Linux == FreeBSD ]]
00:02:26.964 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:26.964 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:26.964 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:26.964 + [[ -x /usr/src/fio-static/fio ]]
00:02:26.964 + export FIO_BIN=/usr/src/fio-static/fio
00:02:26.964 + FIO_BIN=/usr/src/fio-static/fio
00:02:26.964 + sudo dmesg -Tw
00:02:26.964 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:26.964 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:26.964 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:26.964 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:26.964 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:26.964 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:26.964 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:26.964 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:26.964 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:26.964 05:24:14 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:26.964 05:24:14 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:26.964 05:24:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:26.964 05:24:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:26.964 05:24:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:26.964 05:24:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:26.964 05:24:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:26.964 05:24:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:26.964 05:24:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:26.964 05:24:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:26.964 05:24:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:26.964 05:24:14 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:26.964 05:24:14 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:26.964 05:24:14 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:26.964 05:24:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:26.964 05:24:14 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:26.964 05:24:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:26.964 05:24:14 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:26.964 05:24:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:26.964 05:24:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.964 05:24:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.964 05:24:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.964 05:24:14 -- paths/export.sh@5 -- $ export PATH
00:02:26.964 05:24:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.964 05:24:14 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:26.964 05:24:14 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:26.964 05:24:14 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732681454.XXXXXX
00:02:26.964 05:24:14 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732681454.OJgOY7
00:02:26.964 05:24:14 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:26.964 05:24:14 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:26.964 05:24:14 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:26.964 05:24:14 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:26.964 05:24:14 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:26.964 05:24:14 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:26.964 05:24:14 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:26.964 05:24:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:26.964 05:24:14 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:26.964 05:24:14 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:26.964 05:24:14 -- pm/common@17 -- $ local monitor
00:02:26.964 05:24:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.964 05:24:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.964 05:24:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.964 05:24:14 -- pm/common@21 -- $ date +%s
00:02:26.964 05:24:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.964 05:24:14 -- pm/common@21 -- $ date +%s
00:02:26.964 05:24:14 -- pm/common@25 -- $ sleep 1
00:02:26.964 05:24:14 -- pm/common@21 -- $ date +%s
00:02:26.964 05:24:14 -- pm/common@21 -- $ date +%s
00:02:26.964 05:24:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732681454
00:02:26.964 05:24:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732681454
00:02:26.964 05:24:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732681454
00:02:26.964 05:24:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732681454
00:02:26.964 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732681454_collect-cpu-load.pm.log
00:02:26.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732681454_collect-vmstat.pm.log
00:02:26.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732681454_collect-cpu-temp.pm.log
00:02:26.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732681454_collect-bmc-pm.bmc.pm.log
00:02:28.344 05:24:15 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:28.344 05:24:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:28.344 05:24:15 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:28.344 05:24:15 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:28.344 05:24:15 -- spdk/autobuild.sh@16 -- $ date -u
00:02:28.344 Wed Nov 27 04:24:15 AM UTC 2024
00:02:28.344 05:24:15 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:28.344 v25.01-pre-275-ga640d9f98
00:02:28.344 05:24:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:28.344 05:24:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:28.344 05:24:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:28.344 05:24:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:28.344 05:24:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:28.344 05:24:15 -- common/autotest_common.sh@10 -- $ set +x
00:02:28.344 ************************************
00:02:28.344 START TEST ubsan
00:02:28.344 ************************************
00:02:28.344 05:24:15 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:28.344 using ubsan
00:02:28.344
00:02:28.344 real 0m0.000s
00:02:28.344 user 0m0.000s
00:02:28.344 sys 0m0.000s
00:02:28.344 05:24:15 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:28.344 05:24:15 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:28.344 ************************************
00:02:28.344 END TEST ubsan
00:02:28.344 ************************************
00:02:28.344 05:24:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:28.344 05:24:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:28.344 05:24:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:28.344 05:24:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:28.344 05:24:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:28.344 05:24:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:28.344 05:24:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:28.344 05:24:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:28.344 05:24:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:28.344 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:28.344 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:28.603 Using 'verbs' RDMA provider
00:02:41.758 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:53.969 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:53.969 Creating mk/config.mk...done.
00:02:53.969 Creating mk/cc.flags.mk...done.
00:02:53.969 Type 'make' to build.
00:02:53.969 05:24:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:53.969 05:24:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:53.969 05:24:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:53.969 05:24:41 -- common/autotest_common.sh@10 -- $ set +x
00:02:53.969 ************************************
00:02:53.969 START TEST make
00:02:53.969 ************************************
00:02:53.969 05:24:41 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:53.969 make[1]: Nothing to be done for 'all'.
00:02:55.352 The Meson build system
00:02:55.352 Version: 1.5.0
00:02:55.352 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:55.352 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:55.352 Build type: native build
00:02:55.352 Project name: libvfio-user
00:02:55.352 Project version: 0.0.1
00:02:55.352 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:55.352 C linker for the host machine: cc ld.bfd 2.40-14
00:02:55.352 Host machine cpu family: x86_64
00:02:55.352 Host machine cpu: x86_64
00:02:55.352 Run-time dependency threads found: YES
00:02:55.352 Library dl found: YES
00:02:55.352 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:55.352 Run-time dependency json-c found: YES 0.17
00:02:55.352 Run-time dependency cmocka found: YES 1.1.7
00:02:55.352 Program pytest-3 found: NO
00:02:55.352 Program flake8 found: NO
00:02:55.352 Program misspell-fixer found: NO
00:02:55.352 Program restructuredtext-lint found: NO
00:02:55.352 Program valgrind found: YES (/usr/bin/valgrind)
00:02:55.352 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:55.352 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:55.352 Compiler for C supports arguments -Wwrite-strings: YES
00:02:55.352 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:55.352 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:55.352 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:55.352 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:55.352 Build targets in project: 8
00:02:55.352 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:55.352 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:55.352
00:02:55.352 libvfio-user 0.0.1
00:02:55.352
00:02:55.352 User defined options
00:02:55.352 buildtype : debug
00:02:55.352 default_library: shared
00:02:55.352 libdir : /usr/local/lib
00:02:55.352
00:02:55.352 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:55.920 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:56.179 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:56.179 [2/37] Compiling C object samples/null.p/null.c.o
00:02:56.179 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:56.179 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:56.179 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:56.179 [6/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:56.179 [7/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:56.179 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:56.179 [9/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:56.179 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:56.179 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:56.179 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:56.179 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:56.179 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:56.179 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:56.179 [16/37] Compiling C object samples/server.p/server.c.o
00:02:56.179 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:56.179 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:56.179 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:56.179 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:56.179 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:56.179 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:56.179 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:56.179 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:56.179 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:56.179 [26/37] Compiling C object samples/client.p/client.c.o
00:02:56.179 [27/37] Linking target samples/client
00:02:56.179 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:56.437 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:56.438 [30/37] Linking target test/unit_tests
00:02:56.438 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:02:56.438 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:56.438 [33/37] Linking target samples/null
00:02:56.438 [34/37] Linking target samples/server
00:02:56.438 [35/37] Linking target samples/lspci
00:02:56.438 [36/37] Linking target samples/gpio-pci-idio-16
00:02:56.438 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:56.438 INFO: autodetecting backend as ninja
00:02:56.438 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:56.697 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:56.957 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:56.957 ninja: no work to do.
00:03:02.266 The Meson build system
00:03:02.266 Version: 1.5.0
00:03:02.266 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:02.266 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:02.266 Build type: native build
00:03:02.266 Program cat found: YES (/usr/bin/cat)
00:03:02.266 Project name: DPDK
00:03:02.266 Project version: 24.03.0
00:03:02.266 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:02.266 C linker for the host machine: cc ld.bfd 2.40-14
00:03:02.266 Host machine cpu family: x86_64
00:03:02.266 Host machine cpu: x86_64
00:03:02.266 Message: ## Building in Developer Mode ##
00:03:02.266 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:02.266 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:02.266 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:02.266 Program python3 found: YES (/usr/bin/python3)
00:03:02.266 Program cat found: YES (/usr/bin/cat)
00:03:02.266 Compiler for C supports arguments -march=native: YES
00:03:02.266 Checking for size of "void *" : 8
00:03:02.266 Checking for size of "void *" : 8 (cached)
00:03:02.266 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:02.266 Library m found: YES
00:03:02.266 Library numa found: YES
00:03:02.266 Has header "numaif.h" : YES
00:03:02.266 Library fdt found: NO
00:03:02.266 Library execinfo found: NO
00:03:02.266 Has header "execinfo.h" : YES
00:03:02.266 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:02.266 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:02.266 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:02.266 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:02.266 Run-time dependency openssl found: YES 3.1.1
00:03:02.266 Run-time dependency libpcap found: YES 1.10.4
00:03:02.266 Has header "pcap.h" with dependency libpcap: YES
00:03:02.266 Compiler for C supports arguments -Wcast-qual: YES
00:03:02.266 Compiler for C supports arguments -Wdeprecated: YES
00:03:02.266 Compiler for C supports arguments -Wformat: YES
00:03:02.266 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:02.266 Compiler for C supports arguments -Wformat-security: NO
00:03:02.266 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:02.266 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:02.266 Compiler for C supports arguments -Wnested-externs: YES
00:03:02.266 Compiler for C supports arguments -Wold-style-definition: YES
00:03:02.266 Compiler for C supports arguments -Wpointer-arith: YES
00:03:02.266 Compiler for C supports arguments -Wsign-compare: YES
00:03:02.266 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:02.266 Compiler for C supports arguments -Wundef: YES
00:03:02.266 Compiler for C supports arguments -Wwrite-strings: YES
00:03:02.266 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:02.266 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:02.266 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:02.266 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:02.266 Program objdump found: YES (/usr/bin/objdump)
00:03:02.266 Compiler for C supports arguments -mavx512f: YES
00:03:02.266 Checking if "AVX512 checking" compiles: YES
00:03:02.266 Fetching value of define "__SSE4_2__" : 1
00:03:02.266 Fetching value of define "__AES__" : 1
00:03:02.266 Fetching value of define "__AVX__" : 1
00:03:02.266 Fetching value of define "__AVX2__" : 1
00:03:02.266 Fetching value of define "__AVX512BW__" : 1
00:03:02.266 Fetching value of define "__AVX512CD__" : 1
00:03:02.266 Fetching value of define "__AVX512DQ__" : 1
00:03:02.266 Fetching value of define "__AVX512F__" : 1
00:03:02.266 Fetching value of define "__AVX512VL__" : 1 00:03:02.266 Fetching value of define "__PCLMUL__" : 1 00:03:02.266 Fetching value of define "__RDRND__" : 1 00:03:02.266 Fetching value of define "__RDSEED__" : 1 00:03:02.266 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:02.266 Fetching value of define "__znver1__" : (undefined) 00:03:02.266 Fetching value of define "__znver2__" : (undefined) 00:03:02.266 Fetching value of define "__znver3__" : (undefined) 00:03:02.266 Fetching value of define "__znver4__" : (undefined) 00:03:02.266 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:02.266 Message: lib/log: Defining dependency "log" 00:03:02.266 Message: lib/kvargs: Defining dependency "kvargs" 00:03:02.266 Message: lib/telemetry: Defining dependency "telemetry" 00:03:02.266 Checking for function "getentropy" : NO 00:03:02.266 Message: lib/eal: Defining dependency "eal" 00:03:02.266 Message: lib/ring: Defining dependency "ring" 00:03:02.266 Message: lib/rcu: Defining dependency "rcu" 00:03:02.266 Message: lib/mempool: Defining dependency "mempool" 00:03:02.266 Message: lib/mbuf: Defining dependency "mbuf" 00:03:02.266 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:02.266 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:02.266 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:02.266 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:02.266 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:02.266 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:02.266 Compiler for C supports arguments -mpclmul: YES 00:03:02.267 Compiler for C supports arguments -maes: YES 00:03:02.267 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:02.267 Compiler for C supports arguments -mavx512bw: YES 00:03:02.267 Compiler for C supports arguments -mavx512dq: YES 00:03:02.267 Compiler for C supports arguments -mavx512vl: YES 00:03:02.267 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:03:02.267 Compiler for C supports arguments -mavx2: YES 00:03:02.267 Compiler for C supports arguments -mavx: YES 00:03:02.267 Message: lib/net: Defining dependency "net" 00:03:02.267 Message: lib/meter: Defining dependency "meter" 00:03:02.267 Message: lib/ethdev: Defining dependency "ethdev" 00:03:02.267 Message: lib/pci: Defining dependency "pci" 00:03:02.267 Message: lib/cmdline: Defining dependency "cmdline" 00:03:02.267 Message: lib/hash: Defining dependency "hash" 00:03:02.267 Message: lib/timer: Defining dependency "timer" 00:03:02.267 Message: lib/compressdev: Defining dependency "compressdev" 00:03:02.267 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:02.267 Message: lib/dmadev: Defining dependency "dmadev" 00:03:02.267 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:02.267 Message: lib/power: Defining dependency "power" 00:03:02.267 Message: lib/reorder: Defining dependency "reorder" 00:03:02.267 Message: lib/security: Defining dependency "security" 00:03:02.267 Has header "linux/userfaultfd.h" : YES 00:03:02.267 Has header "linux/vduse.h" : YES 00:03:02.267 Message: lib/vhost: Defining dependency "vhost" 00:03:02.267 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:02.267 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:02.267 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:02.267 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:02.267 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:02.267 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:02.267 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:02.267 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:02.267 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:02.267 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:03:02.267 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:02.267 Configuring doxy-api-html.conf using configuration 00:03:02.267 Configuring doxy-api-man.conf using configuration 00:03:02.267 Program mandb found: YES (/usr/bin/mandb) 00:03:02.267 Program sphinx-build found: NO 00:03:02.267 Configuring rte_build_config.h using configuration 00:03:02.267 Message: 00:03:02.267 ================= 00:03:02.267 Applications Enabled 00:03:02.267 ================= 00:03:02.267 00:03:02.267 apps: 00:03:02.267 00:03:02.267 00:03:02.267 Message: 00:03:02.267 ================= 00:03:02.267 Libraries Enabled 00:03:02.267 ================= 00:03:02.267 00:03:02.267 libs: 00:03:02.267 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:02.267 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:02.267 cryptodev, dmadev, power, reorder, security, vhost, 00:03:02.267 00:03:02.267 Message: 00:03:02.267 =============== 00:03:02.267 Drivers Enabled 00:03:02.267 =============== 00:03:02.267 00:03:02.267 common: 00:03:02.267 00:03:02.267 bus: 00:03:02.267 pci, vdev, 00:03:02.267 mempool: 00:03:02.267 ring, 00:03:02.267 dma: 00:03:02.267 00:03:02.267 net: 00:03:02.267 00:03:02.267 crypto: 00:03:02.267 00:03:02.267 compress: 00:03:02.267 00:03:02.267 vdpa: 00:03:02.267 00:03:02.267 00:03:02.267 Message: 00:03:02.267 ================= 00:03:02.267 Content Skipped 00:03:02.267 ================= 00:03:02.267 00:03:02.267 apps: 00:03:02.267 dumpcap: explicitly disabled via build config 00:03:02.267 graph: explicitly disabled via build config 00:03:02.267 pdump: explicitly disabled via build config 00:03:02.267 proc-info: explicitly disabled via build config 00:03:02.267 test-acl: explicitly disabled via build config 00:03:02.267 test-bbdev: explicitly disabled via build config 00:03:02.267 test-cmdline: explicitly disabled via build config 00:03:02.267 test-compress-perf: explicitly disabled via build config 00:03:02.267 test-crypto-perf: explicitly disabled 
via build config 00:03:02.267 test-dma-perf: explicitly disabled via build config 00:03:02.267 test-eventdev: explicitly disabled via build config 00:03:02.267 test-fib: explicitly disabled via build config 00:03:02.267 test-flow-perf: explicitly disabled via build config 00:03:02.267 test-gpudev: explicitly disabled via build config 00:03:02.267 test-mldev: explicitly disabled via build config 00:03:02.267 test-pipeline: explicitly disabled via build config 00:03:02.267 test-pmd: explicitly disabled via build config 00:03:02.267 test-regex: explicitly disabled via build config 00:03:02.267 test-sad: explicitly disabled via build config 00:03:02.267 test-security-perf: explicitly disabled via build config 00:03:02.267 00:03:02.267 libs: 00:03:02.267 argparse: explicitly disabled via build config 00:03:02.267 metrics: explicitly disabled via build config 00:03:02.267 acl: explicitly disabled via build config 00:03:02.267 bbdev: explicitly disabled via build config 00:03:02.267 bitratestats: explicitly disabled via build config 00:03:02.267 bpf: explicitly disabled via build config 00:03:02.267 cfgfile: explicitly disabled via build config 00:03:02.267 distributor: explicitly disabled via build config 00:03:02.267 efd: explicitly disabled via build config 00:03:02.267 eventdev: explicitly disabled via build config 00:03:02.267 dispatcher: explicitly disabled via build config 00:03:02.267 gpudev: explicitly disabled via build config 00:03:02.267 gro: explicitly disabled via build config 00:03:02.267 gso: explicitly disabled via build config 00:03:02.267 ip_frag: explicitly disabled via build config 00:03:02.267 jobstats: explicitly disabled via build config 00:03:02.267 latencystats: explicitly disabled via build config 00:03:02.267 lpm: explicitly disabled via build config 00:03:02.267 member: explicitly disabled via build config 00:03:02.267 pcapng: explicitly disabled via build config 00:03:02.267 rawdev: explicitly disabled via build config 00:03:02.267 regexdev: 
explicitly disabled via build config 00:03:02.267 mldev: explicitly disabled via build config 00:03:02.267 rib: explicitly disabled via build config 00:03:02.267 sched: explicitly disabled via build config 00:03:02.267 stack: explicitly disabled via build config 00:03:02.267 ipsec: explicitly disabled via build config 00:03:02.267 pdcp: explicitly disabled via build config 00:03:02.267 fib: explicitly disabled via build config 00:03:02.267 port: explicitly disabled via build config 00:03:02.267 pdump: explicitly disabled via build config 00:03:02.267 table: explicitly disabled via build config 00:03:02.267 pipeline: explicitly disabled via build config 00:03:02.267 graph: explicitly disabled via build config 00:03:02.267 node: explicitly disabled via build config 00:03:02.268 00:03:02.268 drivers: 00:03:02.268 common/cpt: not in enabled drivers build config 00:03:02.268 common/dpaax: not in enabled drivers build config 00:03:02.268 common/iavf: not in enabled drivers build config 00:03:02.268 common/idpf: not in enabled drivers build config 00:03:02.268 common/ionic: not in enabled drivers build config 00:03:02.268 common/mvep: not in enabled drivers build config 00:03:02.268 common/octeontx: not in enabled drivers build config 00:03:02.268 bus/auxiliary: not in enabled drivers build config 00:03:02.268 bus/cdx: not in enabled drivers build config 00:03:02.268 bus/dpaa: not in enabled drivers build config 00:03:02.268 bus/fslmc: not in enabled drivers build config 00:03:02.268 bus/ifpga: not in enabled drivers build config 00:03:02.268 bus/platform: not in enabled drivers build config 00:03:02.268 bus/uacce: not in enabled drivers build config 00:03:02.268 bus/vmbus: not in enabled drivers build config 00:03:02.268 common/cnxk: not in enabled drivers build config 00:03:02.268 common/mlx5: not in enabled drivers build config 00:03:02.268 common/nfp: not in enabled drivers build config 00:03:02.268 common/nitrox: not in enabled drivers build config 00:03:02.268 
common/qat: not in enabled drivers build config 00:03:02.268 common/sfc_efx: not in enabled drivers build config 00:03:02.268 mempool/bucket: not in enabled drivers build config 00:03:02.268 mempool/cnxk: not in enabled drivers build config 00:03:02.268 mempool/dpaa: not in enabled drivers build config 00:03:02.268 mempool/dpaa2: not in enabled drivers build config 00:03:02.268 mempool/octeontx: not in enabled drivers build config 00:03:02.268 mempool/stack: not in enabled drivers build config 00:03:02.268 dma/cnxk: not in enabled drivers build config 00:03:02.268 dma/dpaa: not in enabled drivers build config 00:03:02.268 dma/dpaa2: not in enabled drivers build config 00:03:02.268 dma/hisilicon: not in enabled drivers build config 00:03:02.268 dma/idxd: not in enabled drivers build config 00:03:02.268 dma/ioat: not in enabled drivers build config 00:03:02.268 dma/skeleton: not in enabled drivers build config 00:03:02.268 net/af_packet: not in enabled drivers build config 00:03:02.268 net/af_xdp: not in enabled drivers build config 00:03:02.268 net/ark: not in enabled drivers build config 00:03:02.268 net/atlantic: not in enabled drivers build config 00:03:02.268 net/avp: not in enabled drivers build config 00:03:02.268 net/axgbe: not in enabled drivers build config 00:03:02.268 net/bnx2x: not in enabled drivers build config 00:03:02.268 net/bnxt: not in enabled drivers build config 00:03:02.268 net/bonding: not in enabled drivers build config 00:03:02.268 net/cnxk: not in enabled drivers build config 00:03:02.268 net/cpfl: not in enabled drivers build config 00:03:02.268 net/cxgbe: not in enabled drivers build config 00:03:02.268 net/dpaa: not in enabled drivers build config 00:03:02.268 net/dpaa2: not in enabled drivers build config 00:03:02.268 net/e1000: not in enabled drivers build config 00:03:02.268 net/ena: not in enabled drivers build config 00:03:02.268 net/enetc: not in enabled drivers build config 00:03:02.268 net/enetfec: not in enabled drivers build 
config 00:03:02.268 net/enic: not in enabled drivers build config 00:03:02.268 net/failsafe: not in enabled drivers build config 00:03:02.268 net/fm10k: not in enabled drivers build config 00:03:02.268 net/gve: not in enabled drivers build config 00:03:02.268 net/hinic: not in enabled drivers build config 00:03:02.268 net/hns3: not in enabled drivers build config 00:03:02.268 net/i40e: not in enabled drivers build config 00:03:02.268 net/iavf: not in enabled drivers build config 00:03:02.268 net/ice: not in enabled drivers build config 00:03:02.268 net/idpf: not in enabled drivers build config 00:03:02.268 net/igc: not in enabled drivers build config 00:03:02.268 net/ionic: not in enabled drivers build config 00:03:02.268 net/ipn3ke: not in enabled drivers build config 00:03:02.268 net/ixgbe: not in enabled drivers build config 00:03:02.268 net/mana: not in enabled drivers build config 00:03:02.268 net/memif: not in enabled drivers build config 00:03:02.268 net/mlx4: not in enabled drivers build config 00:03:02.268 net/mlx5: not in enabled drivers build config 00:03:02.268 net/mvneta: not in enabled drivers build config 00:03:02.268 net/mvpp2: not in enabled drivers build config 00:03:02.268 net/netvsc: not in enabled drivers build config 00:03:02.268 net/nfb: not in enabled drivers build config 00:03:02.268 net/nfp: not in enabled drivers build config 00:03:02.268 net/ngbe: not in enabled drivers build config 00:03:02.268 net/null: not in enabled drivers build config 00:03:02.268 net/octeontx: not in enabled drivers build config 00:03:02.268 net/octeon_ep: not in enabled drivers build config 00:03:02.268 net/pcap: not in enabled drivers build config 00:03:02.268 net/pfe: not in enabled drivers build config 00:03:02.268 net/qede: not in enabled drivers build config 00:03:02.268 net/ring: not in enabled drivers build config 00:03:02.268 net/sfc: not in enabled drivers build config 00:03:02.268 net/softnic: not in enabled drivers build config 00:03:02.268 net/tap: 
not in enabled drivers build config 00:03:02.268 net/thunderx: not in enabled drivers build config 00:03:02.268 net/txgbe: not in enabled drivers build config 00:03:02.268 net/vdev_netvsc: not in enabled drivers build config 00:03:02.268 net/vhost: not in enabled drivers build config 00:03:02.268 net/virtio: not in enabled drivers build config 00:03:02.268 net/vmxnet3: not in enabled drivers build config 00:03:02.268 raw/*: missing internal dependency, "rawdev" 00:03:02.268 crypto/armv8: not in enabled drivers build config 00:03:02.268 crypto/bcmfs: not in enabled drivers build config 00:03:02.268 crypto/caam_jr: not in enabled drivers build config 00:03:02.268 crypto/ccp: not in enabled drivers build config 00:03:02.268 crypto/cnxk: not in enabled drivers build config 00:03:02.268 crypto/dpaa_sec: not in enabled drivers build config 00:03:02.268 crypto/dpaa2_sec: not in enabled drivers build config 00:03:02.268 crypto/ipsec_mb: not in enabled drivers build config 00:03:02.268 crypto/mlx5: not in enabled drivers build config 00:03:02.268 crypto/mvsam: not in enabled drivers build config 00:03:02.268 crypto/nitrox: not in enabled drivers build config 00:03:02.268 crypto/null: not in enabled drivers build config 00:03:02.268 crypto/octeontx: not in enabled drivers build config 00:03:02.268 crypto/openssl: not in enabled drivers build config 00:03:02.268 crypto/scheduler: not in enabled drivers build config 00:03:02.268 crypto/uadk: not in enabled drivers build config 00:03:02.268 crypto/virtio: not in enabled drivers build config 00:03:02.268 compress/isal: not in enabled drivers build config 00:03:02.268 compress/mlx5: not in enabled drivers build config 00:03:02.268 compress/nitrox: not in enabled drivers build config 00:03:02.268 compress/octeontx: not in enabled drivers build config 00:03:02.268 compress/zlib: not in enabled drivers build config 00:03:02.268 regex/*: missing internal dependency, "regexdev" 00:03:02.268 ml/*: missing internal dependency, "mldev" 
00:03:02.268 vdpa/ifc: not in enabled drivers build config 00:03:02.268 vdpa/mlx5: not in enabled drivers build config 00:03:02.268 vdpa/nfp: not in enabled drivers build config 00:03:02.268 vdpa/sfc: not in enabled drivers build config 00:03:02.268 event/*: missing internal dependency, "eventdev" 00:03:02.268 baseband/*: missing internal dependency, "bbdev" 00:03:02.268 gpu/*: missing internal dependency, "gpudev" 00:03:02.268 00:03:02.268 00:03:02.268 Build targets in project: 85 00:03:02.268 00:03:02.268 DPDK 24.03.0 00:03:02.268 00:03:02.268 User defined options 00:03:02.268 buildtype : debug 00:03:02.268 default_library : shared 00:03:02.268 libdir : lib 00:03:02.269 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:02.269 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:02.269 c_link_args : 00:03:02.269 cpu_instruction_set: native 00:03:02.269 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:02.269 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:02.269 enable_docs : false 00:03:02.269 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:02.269 enable_kmods : false 00:03:02.269 max_lcores : 128 00:03:02.269 tests : false 00:03:02.269 00:03:02.269 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:02.842 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:02.842 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:02.842 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:02.842 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:02.842 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:02.842 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:02.842 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:02.842 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:02.842 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:02.842 [9/268] Linking static target lib/librte_kvargs.a 00:03:02.842 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:02.842 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:02.842 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:02.842 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:02.842 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:02.842 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:02.842 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:02.842 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:02.842 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:03.101 [19/268] Linking static target lib/librte_log.a 00:03:03.101 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:03.101 [21/268] Linking static target lib/librte_pci.a 00:03:03.101 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:03.101 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:03.101 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:03.364 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:03.364 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:03.364 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:03.364 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:03.364 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:03.364 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:03.364 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:03.364 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:03.364 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:03.364 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:03.364 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:03.364 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:03.364 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:03.364 [38/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:03.364 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:03.364 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:03.364 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:03.364 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:03.364 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:03.364 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:03.364 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:03.364 [46/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:03.364 [47/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:03.364 [48/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:03.364 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:03.364 [50/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.364 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:03.364 [52/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:03.364 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:03.364 [54/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:03.364 [55/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:03.364 [56/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:03.364 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:03.364 [58/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:03.364 [59/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:03.364 [60/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:03.364 [61/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:03.364 [62/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:03.364 [63/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:03.365 [64/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:03.365 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:03.365 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:03.365 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:03.365 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:03.365 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:03.365 [70/268] Linking static target lib/librte_ring.a 00:03:03.365 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:03.365 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:03.365 [73/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:03.365 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:03.365 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:03.365 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:03.365 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:03.365 [78/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:03.365 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:03.365 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:03.365 [81/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:03.365 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:03.365 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:03.365 [84/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:03.365 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:03.365 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:03.365 [87/268] Linking static target lib/librte_meter.a 00:03:03.365 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:03.365 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:03.365 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:03.365 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:03.365 [92/268] Generating 
lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.365 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:03.365 [94/268] Linking static target lib/librte_telemetry.a 00:03:03.365 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:03.365 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:03.365 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:03.365 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:03.624 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:03.624 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:03.624 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:03.624 [102/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:03.624 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:03.624 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:03.624 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:03.624 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:03.624 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:03.624 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:03.624 [109/268] Linking static target lib/librte_mempool.a 00:03:03.624 [110/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:03.624 [111/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:03.624 [112/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:03.624 [113/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:03.624 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:03.624 [115/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:03.624 [116/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:03.624 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:03.624 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:03.624 [119/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:03.624 [120/268] Linking static target lib/librte_net.a 00:03:03.624 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:03.624 [122/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:03.624 [123/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:03.624 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:03.624 [125/268] Linking static target lib/librte_rcu.a 00:03:03.625 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:03.625 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:03.625 [128/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:03.625 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:03.625 [130/268] Linking static target lib/librte_eal.a 00:03:03.625 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:03.625 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:03.625 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:03.625 [134/268] Linking static target lib/librte_cmdline.a 00:03:03.625 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.625 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:03.625 [137/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:03.625 [138/268] Linking static target lib/librte_mbuf.a 00:03:03.885 
[139/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.885 [140/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:03.885 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:03.885 [142/268] Linking static target lib/librte_timer.a 00:03:03.885 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.885 [144/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:03.885 [145/268] Linking target lib/librte_log.so.24.1 00:03:03.885 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:03.885 [147/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:03.885 [148/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:03.885 [149/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.885 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:03.885 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:03.885 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:03.885 [153/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:03.885 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:03.885 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:03.885 [156/268] Linking static target lib/librte_dmadev.a 00:03:03.885 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:03.885 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:03.885 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:03.885 [160/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.885 
[161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:03.885 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:03.885 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:03.885 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:03.885 [165/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:03.885 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:03.885 [167/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.885 [168/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:03.885 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:03.885 [170/268] Linking static target lib/librte_reorder.a 00:03:03.885 [171/268] Linking target lib/librte_telemetry.so.24.1 00:03:03.885 [172/268] Linking target lib/librte_kvargs.so.24.1 00:03:04.145 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:04.145 [174/268] Linking static target lib/librte_power.a 00:03:04.145 [175/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:04.145 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.145 [177/268] Linking static target lib/librte_security.a 00:03:04.145 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:04.145 [179/268] Linking static target lib/librte_compressdev.a 00:03:04.145 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.145 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.145 [182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:04.145 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 
00:03:04.145 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:04.145 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:04.145 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.145 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.145 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:04.145 [189/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:04.145 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:04.145 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:04.145 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.145 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.145 [194/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:04.145 [195/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.145 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:04.145 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.145 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:04.403 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:04.403 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.403 [201/268] Linking static target lib/librte_hash.a 00:03:04.403 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.403 [203/268] Linking static target drivers/librte_bus_vdev.a 00:03:04.403 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:04.403 [205/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.403 
[206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.403 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.403 [208/268] Linking static target drivers/librte_bus_pci.a 00:03:04.403 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:04.403 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.403 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.403 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.403 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:04.403 [214/268] Linking static target drivers/librte_mempool_ring.a 00:03:04.403 [215/268] Linking static target lib/librte_cryptodev.a 00:03:04.403 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.661 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.661 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:04.661 [219/268] Linking static target lib/librte_ethdev.a 00:03:04.661 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.661 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.661 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.919 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.919 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.919 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 
00:03:05.178 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.178 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.114 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.114 [229/268] Linking static target lib/librte_vhost.a 00:03:06.372 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.750 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.133 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.706 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.706 [234/268] Linking target lib/librte_eal.so.24.1 00:03:13.965 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:13.965 [236/268] Linking target lib/librte_ring.so.24.1 00:03:13.965 [237/268] Linking target lib/librte_meter.so.24.1 00:03:13.965 [238/268] Linking target lib/librte_pci.so.24.1 00:03:13.965 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:13.965 [240/268] Linking target lib/librte_dmadev.so.24.1 00:03:13.965 [241/268] Linking target lib/librte_timer.so.24.1 00:03:13.965 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:13.965 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:13.965 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:13.965 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:13.965 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:13.965 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:13.965 [248/268] Linking target 
drivers/librte_bus_pci.so.24.1 00:03:13.965 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:14.224 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:14.224 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:14.224 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:14.224 [253/268] Linking target lib/librte_mbuf.so.24.1 00:03:14.483 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:14.483 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:14.483 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:14.483 [257/268] Linking target lib/librte_net.so.24.1 00:03:14.483 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:03:14.483 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:14.483 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:14.742 [261/268] Linking target lib/librte_hash.so.24.1 00:03:14.742 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:14.742 [263/268] Linking target lib/librte_security.so.24.1 00:03:14.742 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:14.742 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:14.742 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:14.742 [267/268] Linking target lib/librte_power.so.24.1 00:03:14.742 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:14.742 INFO: autodetecting backend as ninja 00:03:14.742 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:26.974 CC lib/log/log.o 00:03:26.974 CC lib/log/log_flags.o 00:03:26.974 CC lib/log/log_deprecated.o 00:03:26.974 CC lib/ut_mock/mock.o 00:03:26.974 CC lib/ut/ut.o 00:03:26.974 LIB libspdk_log.a 
00:03:26.974 LIB libspdk_ut_mock.a 00:03:26.974 LIB libspdk_ut.a 00:03:26.974 SO libspdk_log.so.7.1 00:03:26.974 SO libspdk_ut_mock.so.6.0 00:03:26.974 SO libspdk_ut.so.2.0 00:03:26.974 SYMLINK libspdk_log.so 00:03:26.974 SYMLINK libspdk_ut_mock.so 00:03:26.974 SYMLINK libspdk_ut.so 00:03:26.974 CC lib/ioat/ioat.o 00:03:26.974 CXX lib/trace_parser/trace.o 00:03:26.974 CC lib/dma/dma.o 00:03:26.974 CC lib/util/base64.o 00:03:26.974 CC lib/util/bit_array.o 00:03:26.974 CC lib/util/cpuset.o 00:03:26.974 CC lib/util/crc16.o 00:03:26.974 CC lib/util/crc32.o 00:03:26.974 CC lib/util/crc32c.o 00:03:26.974 CC lib/util/crc32_ieee.o 00:03:26.974 CC lib/util/crc64.o 00:03:26.974 CC lib/util/dif.o 00:03:26.974 CC lib/util/fd.o 00:03:26.974 CC lib/util/fd_group.o 00:03:26.974 CC lib/util/file.o 00:03:26.974 CC lib/util/hexlify.o 00:03:26.974 CC lib/util/iov.o 00:03:26.974 CC lib/util/math.o 00:03:26.974 CC lib/util/net.o 00:03:26.974 CC lib/util/pipe.o 00:03:26.974 CC lib/util/strerror_tls.o 00:03:26.974 CC lib/util/string.o 00:03:26.974 CC lib/util/xor.o 00:03:26.974 CC lib/util/uuid.o 00:03:26.974 CC lib/util/zipf.o 00:03:26.974 CC lib/util/md5.o 00:03:26.974 CC lib/vfio_user/host/vfio_user_pci.o 00:03:26.974 CC lib/vfio_user/host/vfio_user.o 00:03:26.974 LIB libspdk_dma.a 00:03:26.974 SO libspdk_dma.so.5.0 00:03:26.974 LIB libspdk_ioat.a 00:03:26.974 SYMLINK libspdk_dma.so 00:03:26.974 SO libspdk_ioat.so.7.0 00:03:26.974 SYMLINK libspdk_ioat.so 00:03:26.974 LIB libspdk_vfio_user.a 00:03:26.974 SO libspdk_vfio_user.so.5.0 00:03:26.974 SYMLINK libspdk_vfio_user.so 00:03:26.974 LIB libspdk_util.a 00:03:26.974 SO libspdk_util.so.10.1 00:03:26.974 SYMLINK libspdk_util.so 00:03:26.974 LIB libspdk_trace_parser.a 00:03:26.974 SO libspdk_trace_parser.so.6.0 00:03:26.974 SYMLINK libspdk_trace_parser.so 00:03:26.974 CC lib/json/json_parse.o 00:03:26.974 CC lib/json/json_util.o 00:03:26.974 CC lib/json/json_write.o 00:03:26.974 CC lib/idxd/idxd.o 00:03:26.974 CC lib/idxd/idxd_user.o 
00:03:26.974 CC lib/idxd/idxd_kernel.o 00:03:26.974 CC lib/env_dpdk/env.o 00:03:26.974 CC lib/rdma_utils/rdma_utils.o 00:03:26.974 CC lib/env_dpdk/memory.o 00:03:26.974 CC lib/vmd/vmd.o 00:03:26.974 CC lib/env_dpdk/pci.o 00:03:26.974 CC lib/vmd/led.o 00:03:26.974 CC lib/env_dpdk/init.o 00:03:26.974 CC lib/conf/conf.o 00:03:26.974 CC lib/env_dpdk/threads.o 00:03:26.974 CC lib/env_dpdk/pci_ioat.o 00:03:26.974 CC lib/env_dpdk/pci_virtio.o 00:03:26.974 CC lib/env_dpdk/pci_vmd.o 00:03:26.974 CC lib/env_dpdk/pci_idxd.o 00:03:26.974 CC lib/env_dpdk/pci_event.o 00:03:26.974 CC lib/env_dpdk/sigbus_handler.o 00:03:26.974 CC lib/env_dpdk/pci_dpdk.o 00:03:26.974 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:26.974 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:27.233 LIB libspdk_conf.a 00:03:27.233 SO libspdk_conf.so.6.0 00:03:27.233 LIB libspdk_json.a 00:03:27.233 LIB libspdk_rdma_utils.a 00:03:27.233 SO libspdk_json.so.6.0 00:03:27.233 SO libspdk_rdma_utils.so.1.0 00:03:27.233 SYMLINK libspdk_conf.so 00:03:27.233 SYMLINK libspdk_json.so 00:03:27.233 SYMLINK libspdk_rdma_utils.so 00:03:27.497 LIB libspdk_idxd.a 00:03:27.497 LIB libspdk_vmd.a 00:03:27.497 SO libspdk_idxd.so.12.1 00:03:27.497 SO libspdk_vmd.so.6.0 00:03:27.497 SYMLINK libspdk_idxd.so 00:03:27.497 SYMLINK libspdk_vmd.so 00:03:27.497 CC lib/jsonrpc/jsonrpc_server.o 00:03:27.497 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:27.497 CC lib/jsonrpc/jsonrpc_client.o 00:03:27.497 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:27.497 CC lib/rdma_provider/common.o 00:03:27.497 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:27.758 LIB libspdk_rdma_provider.a 00:03:27.758 LIB libspdk_jsonrpc.a 00:03:27.758 SO libspdk_rdma_provider.so.7.0 00:03:27.758 SO libspdk_jsonrpc.so.6.0 00:03:27.758 SYMLINK libspdk_rdma_provider.so 00:03:27.758 SYMLINK libspdk_jsonrpc.so 00:03:27.758 LIB libspdk_env_dpdk.a 00:03:28.018 SO libspdk_env_dpdk.so.15.1 00:03:28.018 SYMLINK libspdk_env_dpdk.so 00:03:28.018 CC lib/rpc/rpc.o 00:03:28.277 LIB libspdk_rpc.a 
00:03:28.277 SO libspdk_rpc.so.6.0 00:03:28.536 SYMLINK libspdk_rpc.so 00:03:28.795 CC lib/notify/notify.o 00:03:28.795 CC lib/notify/notify_rpc.o 00:03:28.795 CC lib/trace/trace.o 00:03:28.795 CC lib/trace/trace_flags.o 00:03:28.795 CC lib/trace/trace_rpc.o 00:03:28.795 CC lib/keyring/keyring.o 00:03:28.795 CC lib/keyring/keyring_rpc.o 00:03:28.795 LIB libspdk_notify.a 00:03:29.086 SO libspdk_notify.so.6.0 00:03:29.086 LIB libspdk_keyring.a 00:03:29.086 LIB libspdk_trace.a 00:03:29.086 SYMLINK libspdk_notify.so 00:03:29.086 SO libspdk_keyring.so.2.0 00:03:29.086 SO libspdk_trace.so.11.0 00:03:29.086 SYMLINK libspdk_keyring.so 00:03:29.086 SYMLINK libspdk_trace.so 00:03:29.346 CC lib/sock/sock.o 00:03:29.346 CC lib/thread/thread.o 00:03:29.346 CC lib/sock/sock_rpc.o 00:03:29.346 CC lib/thread/iobuf.o 00:03:29.606 LIB libspdk_sock.a 00:03:29.866 SO libspdk_sock.so.10.0 00:03:29.866 SYMLINK libspdk_sock.so 00:03:30.125 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:30.125 CC lib/nvme/nvme_ctrlr.o 00:03:30.125 CC lib/nvme/nvme_fabric.o 00:03:30.125 CC lib/nvme/nvme_ns_cmd.o 00:03:30.125 CC lib/nvme/nvme_ns.o 00:03:30.125 CC lib/nvme/nvme_pcie_common.o 00:03:30.125 CC lib/nvme/nvme_pcie.o 00:03:30.125 CC lib/nvme/nvme_qpair.o 00:03:30.125 CC lib/nvme/nvme.o 00:03:30.125 CC lib/nvme/nvme_quirks.o 00:03:30.125 CC lib/nvme/nvme_transport.o 00:03:30.125 CC lib/nvme/nvme_discovery.o 00:03:30.125 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:30.125 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:30.125 CC lib/nvme/nvme_tcp.o 00:03:30.125 CC lib/nvme/nvme_opal.o 00:03:30.125 CC lib/nvme/nvme_io_msg.o 00:03:30.125 CC lib/nvme/nvme_poll_group.o 00:03:30.125 CC lib/nvme/nvme_zns.o 00:03:30.125 CC lib/nvme/nvme_stubs.o 00:03:30.125 CC lib/nvme/nvme_auth.o 00:03:30.125 CC lib/nvme/nvme_cuse.o 00:03:30.125 CC lib/nvme/nvme_vfio_user.o 00:03:30.125 CC lib/nvme/nvme_rdma.o 00:03:30.384 LIB libspdk_thread.a 00:03:30.384 SO libspdk_thread.so.11.0 00:03:30.643 SYMLINK libspdk_thread.so 00:03:30.902 CC 
lib/accel/accel.o 00:03:30.902 CC lib/init/json_config.o 00:03:30.902 CC lib/accel/accel_rpc.o 00:03:30.902 CC lib/init/subsystem.o 00:03:30.902 CC lib/accel/accel_sw.o 00:03:30.902 CC lib/init/rpc.o 00:03:30.902 CC lib/init/subsystem_rpc.o 00:03:30.902 CC lib/virtio/virtio.o 00:03:30.902 CC lib/virtio/virtio_vhost_user.o 00:03:30.902 CC lib/virtio/virtio_vfio_user.o 00:03:30.902 CC lib/virtio/virtio_pci.o 00:03:30.902 CC lib/vfu_tgt/tgt_endpoint.o 00:03:30.902 CC lib/fsdev/fsdev.o 00:03:30.902 CC lib/vfu_tgt/tgt_rpc.o 00:03:30.902 CC lib/fsdev/fsdev_rpc.o 00:03:30.902 CC lib/fsdev/fsdev_io.o 00:03:30.902 CC lib/blob/blobstore.o 00:03:30.902 CC lib/blob/zeroes.o 00:03:30.902 CC lib/blob/request.o 00:03:30.902 CC lib/blob/blob_bs_dev.o 00:03:31.161 LIB libspdk_init.a 00:03:31.161 SO libspdk_init.so.6.0 00:03:31.161 LIB libspdk_virtio.a 00:03:31.161 LIB libspdk_vfu_tgt.a 00:03:31.161 SO libspdk_virtio.so.7.0 00:03:31.161 SYMLINK libspdk_init.so 00:03:31.161 SO libspdk_vfu_tgt.so.3.0 00:03:31.161 SYMLINK libspdk_virtio.so 00:03:31.161 SYMLINK libspdk_vfu_tgt.so 00:03:31.432 LIB libspdk_fsdev.a 00:03:31.432 SO libspdk_fsdev.so.2.0 00:03:31.433 CC lib/event/app.o 00:03:31.433 CC lib/event/reactor.o 00:03:31.433 CC lib/event/log_rpc.o 00:03:31.433 CC lib/event/app_rpc.o 00:03:31.433 CC lib/event/scheduler_static.o 00:03:31.433 SYMLINK libspdk_fsdev.so 00:03:31.692 LIB libspdk_accel.a 00:03:31.692 SO libspdk_accel.so.16.0 00:03:31.692 SYMLINK libspdk_accel.so 00:03:31.692 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:31.692 LIB libspdk_event.a 00:03:31.952 LIB libspdk_nvme.a 00:03:31.952 SO libspdk_event.so.14.0 00:03:31.952 SYMLINK libspdk_event.so 00:03:31.952 SO libspdk_nvme.so.15.0 00:03:31.952 CC lib/bdev/bdev.o 00:03:31.952 CC lib/bdev/bdev_rpc.o 00:03:31.952 CC lib/bdev/bdev_zone.o 00:03:31.952 CC lib/bdev/part.o 00:03:31.952 CC lib/bdev/scsi_nvme.o 00:03:32.212 SYMLINK libspdk_nvme.so 00:03:32.212 LIB libspdk_fuse_dispatcher.a 00:03:32.212 SO 
libspdk_fuse_dispatcher.so.1.0 00:03:32.470 SYMLINK libspdk_fuse_dispatcher.so 00:03:33.039 LIB libspdk_blob.a 00:03:33.039 SO libspdk_blob.so.12.0 00:03:33.298 SYMLINK libspdk_blob.so 00:03:33.556 CC lib/blobfs/blobfs.o 00:03:33.556 CC lib/blobfs/tree.o 00:03:33.556 CC lib/lvol/lvol.o 00:03:33.815 LIB libspdk_bdev.a 00:03:34.075 SO libspdk_bdev.so.17.0 00:03:34.075 SYMLINK libspdk_bdev.so 00:03:34.075 LIB libspdk_blobfs.a 00:03:34.075 SO libspdk_blobfs.so.11.0 00:03:34.334 LIB libspdk_lvol.a 00:03:34.334 SYMLINK libspdk_blobfs.so 00:03:34.334 SO libspdk_lvol.so.11.0 00:03:34.334 SYMLINK libspdk_lvol.so 00:03:34.334 CC lib/ftl/ftl_core.o 00:03:34.334 CC lib/nbd/nbd.o 00:03:34.334 CC lib/ftl/ftl_init.o 00:03:34.334 CC lib/nbd/nbd_rpc.o 00:03:34.334 CC lib/ftl/ftl_layout.o 00:03:34.334 CC lib/ftl/ftl_debug.o 00:03:34.334 CC lib/ftl/ftl_io.o 00:03:34.334 CC lib/ftl/ftl_sb.o 00:03:34.334 CC lib/ftl/ftl_l2p.o 00:03:34.334 CC lib/ftl/ftl_l2p_flat.o 00:03:34.334 CC lib/scsi/dev.o 00:03:34.334 CC lib/nvmf/ctrlr.o 00:03:34.334 CC lib/ftl/ftl_nv_cache.o 00:03:34.334 CC lib/nvmf/ctrlr_discovery.o 00:03:34.334 CC lib/scsi/lun.o 00:03:34.334 CC lib/ftl/ftl_band.o 00:03:34.334 CC lib/nvmf/ctrlr_bdev.o 00:03:34.334 CC lib/ublk/ublk.o 00:03:34.334 CC lib/scsi/port.o 00:03:34.334 CC lib/ftl/ftl_band_ops.o 00:03:34.334 CC lib/nvmf/subsystem.o 00:03:34.334 CC lib/scsi/scsi.o 00:03:34.334 CC lib/ftl/ftl_writer.o 00:03:34.334 CC lib/ublk/ublk_rpc.o 00:03:34.334 CC lib/nvmf/nvmf.o 00:03:34.334 CC lib/nvmf/nvmf_rpc.o 00:03:34.334 CC lib/ftl/ftl_rq.o 00:03:34.334 CC lib/scsi/scsi_bdev.o 00:03:34.334 CC lib/ftl/ftl_reloc.o 00:03:34.334 CC lib/ftl/ftl_l2p_cache.o 00:03:34.334 CC lib/scsi/scsi_pr.o 00:03:34.334 CC lib/nvmf/transport.o 00:03:34.334 CC lib/scsi/scsi_rpc.o 00:03:34.334 CC lib/scsi/task.o 00:03:34.334 CC lib/ftl/ftl_p2l.o 00:03:34.334 CC lib/nvmf/tcp.o 00:03:34.334 CC lib/ftl/ftl_p2l_log.o 00:03:34.334 CC lib/nvmf/stubs.o 00:03:34.334 CC lib/nvmf/mdns_server.o 00:03:34.334 CC 
lib/ftl/mngt/ftl_mngt.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:34.334 CC lib/nvmf/rdma.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:34.334 CC lib/nvmf/vfio_user.o 00:03:34.334 CC lib/nvmf/auth.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:34.334 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:34.334 CC lib/ftl/utils/ftl_conf.o 00:03:34.334 CC lib/ftl/utils/ftl_md.o 00:03:34.334 CC lib/ftl/utils/ftl_bitmap.o 00:03:34.334 CC lib/ftl/utils/ftl_mempool.o 00:03:34.334 CC lib/ftl/utils/ftl_property.o 00:03:34.334 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:34.334 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:34.334 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:34.334 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:34.334 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:34.334 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:34.334 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:34.334 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:34.334 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:34.334 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:34.334 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:34.334 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:34.334 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:34.334 CC lib/ftl/base/ftl_base_dev.o 00:03:34.334 CC lib/ftl/ftl_trace.o 00:03:34.334 CC lib/ftl/base/ftl_base_bdev.o 00:03:34.903 LIB libspdk_nbd.a 00:03:35.162 SO libspdk_nbd.so.7.0 00:03:35.162 LIB libspdk_scsi.a 00:03:35.162 SYMLINK libspdk_nbd.so 00:03:35.162 SO libspdk_scsi.so.9.0 00:03:35.162 SYMLINK libspdk_scsi.so 00:03:35.162 LIB libspdk_ublk.a 00:03:35.162 SO libspdk_ublk.so.3.0 00:03:35.420 SYMLINK libspdk_ublk.so 00:03:35.420 LIB libspdk_ftl.a 
00:03:35.420 CC lib/vhost/vhost.o 00:03:35.420 CC lib/vhost/vhost_scsi.o 00:03:35.420 CC lib/vhost/vhost_rpc.o 00:03:35.420 CC lib/vhost/vhost_blk.o 00:03:35.420 CC lib/iscsi/conn.o 00:03:35.420 CC lib/vhost/rte_vhost_user.o 00:03:35.420 CC lib/iscsi/init_grp.o 00:03:35.420 CC lib/iscsi/iscsi.o 00:03:35.420 CC lib/iscsi/param.o 00:03:35.420 CC lib/iscsi/portal_grp.o 00:03:35.420 CC lib/iscsi/tgt_node.o 00:03:35.420 CC lib/iscsi/iscsi_subsystem.o 00:03:35.420 CC lib/iscsi/iscsi_rpc.o 00:03:35.420 CC lib/iscsi/task.o 00:03:35.420 SO libspdk_ftl.so.9.0 00:03:35.678 SYMLINK libspdk_ftl.so 00:03:36.246 LIB libspdk_nvmf.a 00:03:36.246 SO libspdk_nvmf.so.20.0 00:03:36.246 LIB libspdk_vhost.a 00:03:36.246 SYMLINK libspdk_nvmf.so 00:03:36.246 SO libspdk_vhost.so.8.0 00:03:36.505 SYMLINK libspdk_vhost.so 00:03:36.505 LIB libspdk_iscsi.a 00:03:36.505 SO libspdk_iscsi.so.8.0 00:03:36.764 SYMLINK libspdk_iscsi.so 00:03:37.332 CC module/env_dpdk/env_dpdk_rpc.o 00:03:37.332 CC module/vfu_device/vfu_virtio.o 00:03:37.332 CC module/vfu_device/vfu_virtio_blk.o 00:03:37.332 CC module/vfu_device/vfu_virtio_scsi.o 00:03:37.332 CC module/vfu_device/vfu_virtio_rpc.o 00:03:37.332 CC module/vfu_device/vfu_virtio_fs.o 00:03:37.332 CC module/blob/bdev/blob_bdev.o 00:03:37.332 CC module/accel/iaa/accel_iaa.o 00:03:37.333 CC module/accel/iaa/accel_iaa_rpc.o 00:03:37.333 CC module/accel/ioat/accel_ioat.o 00:03:37.333 CC module/accel/error/accel_error.o 00:03:37.333 LIB libspdk_env_dpdk_rpc.a 00:03:37.333 CC module/accel/ioat/accel_ioat_rpc.o 00:03:37.333 CC module/accel/error/accel_error_rpc.o 00:03:37.333 CC module/accel/dsa/accel_dsa_rpc.o 00:03:37.333 CC module/keyring/linux/keyring.o 00:03:37.333 CC module/keyring/file/keyring.o 00:03:37.333 CC module/sock/posix/posix.o 00:03:37.333 CC module/accel/dsa/accel_dsa.o 00:03:37.333 CC module/keyring/linux/keyring_rpc.o 00:03:37.333 CC module/keyring/file/keyring_rpc.o 00:03:37.333 CC module/scheduler/gscheduler/gscheduler.o 00:03:37.333 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:03:37.333 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:37.333 CC module/fsdev/aio/fsdev_aio.o 00:03:37.333 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:37.333 CC module/fsdev/aio/linux_aio_mgr.o 00:03:37.333 SO libspdk_env_dpdk_rpc.so.6.0 00:03:37.333 SYMLINK libspdk_env_dpdk_rpc.so 00:03:37.592 LIB libspdk_keyring_linux.a 00:03:37.592 LIB libspdk_scheduler_gscheduler.a 00:03:37.592 LIB libspdk_keyring_file.a 00:03:37.592 LIB libspdk_scheduler_dpdk_governor.a 00:03:37.592 LIB libspdk_accel_iaa.a 00:03:37.592 LIB libspdk_accel_ioat.a 00:03:37.592 SO libspdk_keyring_linux.so.1.0 00:03:37.592 SO libspdk_scheduler_gscheduler.so.4.0 00:03:37.592 LIB libspdk_accel_error.a 00:03:37.592 SO libspdk_keyring_file.so.2.0 00:03:37.592 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:37.592 SO libspdk_accel_ioat.so.6.0 00:03:37.592 LIB libspdk_scheduler_dynamic.a 00:03:37.592 SO libspdk_accel_iaa.so.3.0 00:03:37.592 LIB libspdk_blob_bdev.a 00:03:37.592 SO libspdk_accel_error.so.2.0 00:03:37.592 SYMLINK libspdk_keyring_linux.so 00:03:37.592 SYMLINK libspdk_scheduler_gscheduler.so 00:03:37.592 SO libspdk_scheduler_dynamic.so.4.0 00:03:37.592 SO libspdk_blob_bdev.so.12.0 00:03:37.592 SYMLINK libspdk_keyring_file.so 00:03:37.592 SYMLINK libspdk_accel_ioat.so 00:03:37.592 SYMLINK libspdk_accel_iaa.so 00:03:37.592 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:37.592 LIB libspdk_accel_dsa.a 00:03:37.592 SYMLINK libspdk_accel_error.so 00:03:37.592 SO libspdk_accel_dsa.so.5.0 00:03:37.592 SYMLINK libspdk_scheduler_dynamic.so 00:03:37.592 SYMLINK libspdk_blob_bdev.so 00:03:37.851 SYMLINK libspdk_accel_dsa.so 00:03:37.851 LIB libspdk_vfu_device.a 00:03:37.851 SO libspdk_vfu_device.so.3.0 00:03:37.851 SYMLINK libspdk_vfu_device.so 00:03:37.851 LIB libspdk_fsdev_aio.a 00:03:37.851 LIB libspdk_sock_posix.a 00:03:37.851 SO libspdk_fsdev_aio.so.1.0 00:03:38.110 SO libspdk_sock_posix.so.6.0 00:03:38.110 SYMLINK libspdk_fsdev_aio.so 
00:03:38.110 SYMLINK libspdk_sock_posix.so 00:03:38.110 CC module/blobfs/bdev/blobfs_bdev.o 00:03:38.110 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:38.110 CC module/bdev/malloc/bdev_malloc.o 00:03:38.110 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:38.110 CC module/bdev/iscsi/bdev_iscsi.o 00:03:38.110 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:38.110 CC module/bdev/error/vbdev_error_rpc.o 00:03:38.110 CC module/bdev/error/vbdev_error.o 00:03:38.110 CC module/bdev/split/vbdev_split.o 00:03:38.110 CC module/bdev/split/vbdev_split_rpc.o 00:03:38.110 CC module/bdev/lvol/vbdev_lvol.o 00:03:38.110 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:38.110 CC module/bdev/gpt/gpt.o 00:03:38.110 CC module/bdev/delay/vbdev_delay.o 00:03:38.110 CC module/bdev/gpt/vbdev_gpt.o 00:03:38.110 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:38.110 CC module/bdev/null/bdev_null.o 00:03:38.110 CC module/bdev/null/bdev_null_rpc.o 00:03:38.110 CC module/bdev/nvme/bdev_nvme.o 00:03:38.110 CC module/bdev/aio/bdev_aio.o 00:03:38.110 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:38.110 CC module/bdev/aio/bdev_aio_rpc.o 00:03:38.110 CC module/bdev/ftl/bdev_ftl.o 00:03:38.110 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:38.110 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:38.110 CC module/bdev/nvme/nvme_rpc.o 00:03:38.110 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:38.110 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:38.110 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:38.110 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:38.110 CC module/bdev/passthru/vbdev_passthru.o 00:03:38.110 CC module/bdev/nvme/bdev_mdns_client.o 00:03:38.110 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:38.110 CC module/bdev/raid/bdev_raid_rpc.o 00:03:38.110 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:38.110 CC module/bdev/nvme/vbdev_opal.o 00:03:38.110 CC module/bdev/raid/bdev_raid.o 00:03:38.110 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:38.110 CC module/bdev/raid/bdev_raid_sb.o 00:03:38.110 CC 
module/bdev/raid/raid0.o 00:03:38.110 CC module/bdev/raid/concat.o 00:03:38.110 CC module/bdev/raid/raid1.o 00:03:38.369 LIB libspdk_blobfs_bdev.a 00:03:38.369 SO libspdk_blobfs_bdev.so.6.0 00:03:38.369 LIB libspdk_bdev_split.a 00:03:38.369 SO libspdk_bdev_split.so.6.0 00:03:38.369 SYMLINK libspdk_blobfs_bdev.so 00:03:38.369 LIB libspdk_bdev_null.a 00:03:38.369 LIB libspdk_bdev_ftl.a 00:03:38.369 LIB libspdk_bdev_error.a 00:03:38.627 LIB libspdk_bdev_passthru.a 00:03:38.627 SYMLINK libspdk_bdev_split.so 00:03:38.627 SO libspdk_bdev_ftl.so.6.0 00:03:38.627 SO libspdk_bdev_null.so.6.0 00:03:38.627 LIB libspdk_bdev_gpt.a 00:03:38.628 LIB libspdk_bdev_malloc.a 00:03:38.628 SO libspdk_bdev_gpt.so.6.0 00:03:38.628 SO libspdk_bdev_error.so.6.0 00:03:38.628 SO libspdk_bdev_passthru.so.6.0 00:03:38.628 LIB libspdk_bdev_zone_block.a 00:03:38.628 LIB libspdk_bdev_aio.a 00:03:38.628 SO libspdk_bdev_malloc.so.6.0 00:03:38.628 SYMLINK libspdk_bdev_ftl.so 00:03:38.628 LIB libspdk_bdev_delay.a 00:03:38.628 LIB libspdk_bdev_iscsi.a 00:03:38.628 SYMLINK libspdk_bdev_null.so 00:03:38.628 SO libspdk_bdev_zone_block.so.6.0 00:03:38.628 SO libspdk_bdev_aio.so.6.0 00:03:38.628 SYMLINK libspdk_bdev_gpt.so 00:03:38.628 SYMLINK libspdk_bdev_error.so 00:03:38.628 SO libspdk_bdev_delay.so.6.0 00:03:38.628 SO libspdk_bdev_iscsi.so.6.0 00:03:38.628 SYMLINK libspdk_bdev_passthru.so 00:03:38.628 SYMLINK libspdk_bdev_malloc.so 00:03:38.628 SYMLINK libspdk_bdev_zone_block.so 00:03:38.628 SYMLINK libspdk_bdev_aio.so 00:03:38.628 SYMLINK libspdk_bdev_delay.so 00:03:38.628 SYMLINK libspdk_bdev_iscsi.so 00:03:38.628 LIB libspdk_bdev_lvol.a 00:03:38.628 LIB libspdk_bdev_virtio.a 00:03:38.628 SO libspdk_bdev_lvol.so.6.0 00:03:38.628 SO libspdk_bdev_virtio.so.6.0 00:03:38.886 SYMLINK libspdk_bdev_lvol.so 00:03:38.886 SYMLINK libspdk_bdev_virtio.so 00:03:39.145 LIB libspdk_bdev_raid.a 00:03:39.145 SO libspdk_bdev_raid.so.6.0 00:03:39.145 SYMLINK libspdk_bdev_raid.so 00:03:40.083 LIB libspdk_bdev_nvme.a 
00:03:40.083 SO libspdk_bdev_nvme.so.7.1 00:03:40.083 SYMLINK libspdk_bdev_nvme.so 00:03:41.022 CC module/event/subsystems/iobuf/iobuf.o 00:03:41.022 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:41.022 CC module/event/subsystems/vmd/vmd.o 00:03:41.022 CC module/event/subsystems/sock/sock.o 00:03:41.022 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:41.022 CC module/event/subsystems/scheduler/scheduler.o 00:03:41.022 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:41.022 CC module/event/subsystems/keyring/keyring.o 00:03:41.022 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:41.022 CC module/event/subsystems/fsdev/fsdev.o 00:03:41.022 LIB libspdk_event_scheduler.a 00:03:41.022 LIB libspdk_event_vfu_tgt.a 00:03:41.022 LIB libspdk_event_vhost_blk.a 00:03:41.022 LIB libspdk_event_fsdev.a 00:03:41.022 LIB libspdk_event_keyring.a 00:03:41.022 LIB libspdk_event_iobuf.a 00:03:41.022 LIB libspdk_event_sock.a 00:03:41.022 LIB libspdk_event_vmd.a 00:03:41.022 SO libspdk_event_vfu_tgt.so.3.0 00:03:41.022 SO libspdk_event_scheduler.so.4.0 00:03:41.022 SO libspdk_event_fsdev.so.1.0 00:03:41.022 SO libspdk_event_vhost_blk.so.3.0 00:03:41.022 SO libspdk_event_keyring.so.1.0 00:03:41.022 SO libspdk_event_vmd.so.6.0 00:03:41.022 SO libspdk_event_iobuf.so.3.0 00:03:41.022 SO libspdk_event_sock.so.5.0 00:03:41.022 SYMLINK libspdk_event_vfu_tgt.so 00:03:41.022 SYMLINK libspdk_event_scheduler.so 00:03:41.022 SYMLINK libspdk_event_fsdev.so 00:03:41.022 SYMLINK libspdk_event_vhost_blk.so 00:03:41.022 SYMLINK libspdk_event_keyring.so 00:03:41.022 SYMLINK libspdk_event_vmd.so 00:03:41.022 SYMLINK libspdk_event_sock.so 00:03:41.022 SYMLINK libspdk_event_iobuf.so 00:03:41.281 CC module/event/subsystems/accel/accel.o 00:03:41.540 LIB libspdk_event_accel.a 00:03:41.540 SO libspdk_event_accel.so.6.0 00:03:41.540 SYMLINK libspdk_event_accel.so 00:03:41.800 CC module/event/subsystems/bdev/bdev.o 00:03:42.059 LIB libspdk_event_bdev.a 00:03:42.059 SO libspdk_event_bdev.so.6.0 
00:03:42.059 SYMLINK libspdk_event_bdev.so 00:03:42.627 CC module/event/subsystems/ublk/ublk.o 00:03:42.627 CC module/event/subsystems/scsi/scsi.o 00:03:42.627 CC module/event/subsystems/nbd/nbd.o 00:03:42.627 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:42.627 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:42.627 LIB libspdk_event_nbd.a 00:03:42.627 LIB libspdk_event_ublk.a 00:03:42.627 LIB libspdk_event_scsi.a 00:03:42.627 SO libspdk_event_ublk.so.3.0 00:03:42.627 SO libspdk_event_nbd.so.6.0 00:03:42.627 SO libspdk_event_scsi.so.6.0 00:03:42.627 LIB libspdk_event_nvmf.a 00:03:42.627 SYMLINK libspdk_event_ublk.so 00:03:42.627 SYMLINK libspdk_event_nbd.so 00:03:42.627 SYMLINK libspdk_event_scsi.so 00:03:42.627 SO libspdk_event_nvmf.so.6.0 00:03:42.886 SYMLINK libspdk_event_nvmf.so 00:03:43.146 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:43.146 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.146 LIB libspdk_event_vhost_scsi.a 00:03:43.146 LIB libspdk_event_iscsi.a 00:03:43.146 SO libspdk_event_vhost_scsi.so.3.0 00:03:43.146 SO libspdk_event_iscsi.so.6.0 00:03:43.146 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.404 SYMLINK libspdk_event_iscsi.so 00:03:43.404 SO libspdk.so.6.0 00:03:43.404 SYMLINK libspdk.so 00:03:43.979 CC app/spdk_lspci/spdk_lspci.o 00:03:43.979 CC app/trace_record/trace_record.o 00:03:43.979 CXX app/trace/trace.o 00:03:43.979 CC app/spdk_nvme_perf/perf.o 00:03:43.979 CC app/spdk_top/spdk_top.o 00:03:43.979 CC test/rpc_client/rpc_client_test.o 00:03:43.979 CC app/spdk_nvme_discover/discovery_aer.o 00:03:43.979 TEST_HEADER include/spdk/accel_module.h 00:03:43.979 TEST_HEADER include/spdk/accel.h 00:03:43.979 TEST_HEADER include/spdk/barrier.h 00:03:43.979 TEST_HEADER include/spdk/assert.h 00:03:43.979 TEST_HEADER include/spdk/bdev_module.h 00:03:43.979 TEST_HEADER include/spdk/base64.h 00:03:43.979 TEST_HEADER include/spdk/bdev.h 00:03:43.979 CC app/spdk_nvme_identify/identify.o 00:03:43.979 TEST_HEADER include/spdk/bdev_zone.h 
00:03:43.979 TEST_HEADER include/spdk/bit_array.h 00:03:43.979 TEST_HEADER include/spdk/bit_pool.h 00:03:43.979 TEST_HEADER include/spdk/blob_bdev.h 00:03:43.979 TEST_HEADER include/spdk/blobfs.h 00:03:43.979 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:43.979 TEST_HEADER include/spdk/conf.h 00:03:43.979 TEST_HEADER include/spdk/blob.h 00:03:43.979 TEST_HEADER include/spdk/config.h 00:03:43.979 TEST_HEADER include/spdk/cpuset.h 00:03:43.979 TEST_HEADER include/spdk/crc16.h 00:03:43.979 TEST_HEADER include/spdk/crc64.h 00:03:43.979 TEST_HEADER include/spdk/crc32.h 00:03:43.979 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:43.979 TEST_HEADER include/spdk/dif.h 00:03:43.979 TEST_HEADER include/spdk/endian.h 00:03:43.979 TEST_HEADER include/spdk/env_dpdk.h 00:03:43.979 TEST_HEADER include/spdk/dma.h 00:03:43.979 TEST_HEADER include/spdk/env.h 00:03:43.979 TEST_HEADER include/spdk/event.h 00:03:43.979 TEST_HEADER include/spdk/file.h 00:03:43.979 TEST_HEADER include/spdk/fd_group.h 00:03:43.979 TEST_HEADER include/spdk/fsdev.h 00:03:43.979 TEST_HEADER include/spdk/fd.h 00:03:43.979 TEST_HEADER include/spdk/fsdev_module.h 00:03:43.979 TEST_HEADER include/spdk/ftl.h 00:03:43.979 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:43.979 TEST_HEADER include/spdk/gpt_spec.h 00:03:43.979 TEST_HEADER include/spdk/hexlify.h 00:03:43.979 TEST_HEADER include/spdk/histogram_data.h 00:03:43.979 TEST_HEADER include/spdk/idxd.h 00:03:43.979 TEST_HEADER include/spdk/idxd_spec.h 00:03:43.979 TEST_HEADER include/spdk/init.h 00:03:43.979 TEST_HEADER include/spdk/ioat_spec.h 00:03:43.979 TEST_HEADER include/spdk/ioat.h 00:03:43.979 TEST_HEADER include/spdk/iscsi_spec.h 00:03:43.979 TEST_HEADER include/spdk/json.h 00:03:43.979 TEST_HEADER include/spdk/jsonrpc.h 00:03:43.979 TEST_HEADER include/spdk/keyring_module.h 00:03:43.979 TEST_HEADER include/spdk/keyring.h 00:03:43.979 TEST_HEADER include/spdk/likely.h 00:03:43.979 TEST_HEADER include/spdk/log.h 00:03:43.979 TEST_HEADER 
include/spdk/md5.h 00:03:43.979 TEST_HEADER include/spdk/lvol.h 00:03:43.979 CC app/spdk_dd/spdk_dd.o 00:03:43.979 TEST_HEADER include/spdk/memory.h 00:03:43.979 TEST_HEADER include/spdk/mmio.h 00:03:43.979 TEST_HEADER include/spdk/net.h 00:03:43.979 TEST_HEADER include/spdk/nbd.h 00:03:43.979 TEST_HEADER include/spdk/notify.h 00:03:43.979 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:43.979 TEST_HEADER include/spdk/nvme.h 00:03:43.979 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:43.979 TEST_HEADER include/spdk/nvme_intel.h 00:03:43.979 TEST_HEADER include/spdk/nvme_spec.h 00:03:43.979 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:43.979 TEST_HEADER include/spdk/nvme_zns.h 00:03:43.979 TEST_HEADER include/spdk/nvmf_spec.h 00:03:43.979 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:43.979 TEST_HEADER include/spdk/nvmf.h 00:03:43.979 TEST_HEADER include/spdk/opal.h 00:03:43.979 TEST_HEADER include/spdk/opal_spec.h 00:03:43.979 CC app/iscsi_tgt/iscsi_tgt.o 00:03:43.979 TEST_HEADER include/spdk/pci_ids.h 00:03:43.979 TEST_HEADER include/spdk/nvmf_transport.h 00:03:43.979 TEST_HEADER include/spdk/pipe.h 00:03:43.979 TEST_HEADER include/spdk/reduce.h 00:03:43.979 TEST_HEADER include/spdk/queue.h 00:03:43.979 TEST_HEADER include/spdk/rpc.h 00:03:43.979 TEST_HEADER include/spdk/scheduler.h 00:03:43.979 TEST_HEADER include/spdk/scsi_spec.h 00:03:43.979 TEST_HEADER include/spdk/scsi.h 00:03:43.979 TEST_HEADER include/spdk/sock.h 00:03:43.979 CC app/nvmf_tgt/nvmf_main.o 00:03:43.979 TEST_HEADER include/spdk/stdinc.h 00:03:43.979 TEST_HEADER include/spdk/string.h 00:03:43.979 TEST_HEADER include/spdk/trace_parser.h 00:03:43.979 TEST_HEADER include/spdk/thread.h 00:03:43.979 TEST_HEADER include/spdk/trace.h 00:03:43.979 TEST_HEADER include/spdk/tree.h 00:03:43.979 TEST_HEADER include/spdk/uuid.h 00:03:43.979 TEST_HEADER include/spdk/ublk.h 00:03:43.979 TEST_HEADER include/spdk/util.h 00:03:43.979 TEST_HEADER include/spdk/version.h 00:03:43.979 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:03:43.979 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:43.979 TEST_HEADER include/spdk/vhost.h 00:03:43.979 TEST_HEADER include/spdk/vmd.h 00:03:43.979 TEST_HEADER include/spdk/xor.h 00:03:43.979 TEST_HEADER include/spdk/zipf.h 00:03:43.979 CXX test/cpp_headers/accel_module.o 00:03:43.979 CXX test/cpp_headers/accel.o 00:03:43.979 CXX test/cpp_headers/assert.o 00:03:43.979 CXX test/cpp_headers/barrier.o 00:03:43.979 CXX test/cpp_headers/base64.o 00:03:43.979 CC app/spdk_tgt/spdk_tgt.o 00:03:43.979 CXX test/cpp_headers/bdev.o 00:03:43.979 CXX test/cpp_headers/bdev_module.o 00:03:43.979 CXX test/cpp_headers/bdev_zone.o 00:03:43.979 CXX test/cpp_headers/bit_array.o 00:03:43.979 CXX test/cpp_headers/bit_pool.o 00:03:43.979 CXX test/cpp_headers/blob_bdev.o 00:03:43.979 CXX test/cpp_headers/blobfs.o 00:03:43.979 CXX test/cpp_headers/blobfs_bdev.o 00:03:43.979 CXX test/cpp_headers/conf.o 00:03:43.979 CXX test/cpp_headers/blob.o 00:03:43.979 CXX test/cpp_headers/config.o 00:03:43.979 CXX test/cpp_headers/cpuset.o 00:03:43.979 CXX test/cpp_headers/crc16.o 00:03:43.979 CXX test/cpp_headers/crc32.o 00:03:43.979 CXX test/cpp_headers/crc64.o 00:03:43.979 CXX test/cpp_headers/dif.o 00:03:43.979 CXX test/cpp_headers/dma.o 00:03:43.979 CXX test/cpp_headers/endian.o 00:03:43.979 CXX test/cpp_headers/env_dpdk.o 00:03:43.979 CXX test/cpp_headers/env.o 00:03:43.979 CXX test/cpp_headers/fd_group.o 00:03:43.979 CXX test/cpp_headers/event.o 00:03:43.979 CXX test/cpp_headers/fd.o 00:03:43.979 CXX test/cpp_headers/file.o 00:03:43.979 CXX test/cpp_headers/ftl.o 00:03:43.980 CXX test/cpp_headers/fsdev_module.o 00:03:43.980 CXX test/cpp_headers/fsdev.o 00:03:43.980 CXX test/cpp_headers/gpt_spec.o 00:03:43.980 CXX test/cpp_headers/fuse_dispatcher.o 00:03:43.980 CXX test/cpp_headers/hexlify.o 00:03:43.980 CXX test/cpp_headers/histogram_data.o 00:03:43.980 CXX test/cpp_headers/idxd.o 00:03:43.980 CXX test/cpp_headers/init.o 00:03:43.980 CXX 
test/cpp_headers/ioat.o 00:03:43.980 CXX test/cpp_headers/idxd_spec.o 00:03:43.980 CXX test/cpp_headers/ioat_spec.o 00:03:43.980 CXX test/cpp_headers/iscsi_spec.o 00:03:43.980 CXX test/cpp_headers/json.o 00:03:43.980 CXX test/cpp_headers/keyring_module.o 00:03:43.980 CXX test/cpp_headers/keyring.o 00:03:43.980 CXX test/cpp_headers/jsonrpc.o 00:03:43.980 CXX test/cpp_headers/log.o 00:03:43.980 CXX test/cpp_headers/lvol.o 00:03:43.980 CXX test/cpp_headers/likely.o 00:03:43.980 CXX test/cpp_headers/mmio.o 00:03:43.980 CXX test/cpp_headers/memory.o 00:03:43.980 CXX test/cpp_headers/md5.o 00:03:43.980 CXX test/cpp_headers/nbd.o 00:03:43.980 CC examples/ioat/perf/perf.o 00:03:43.980 CXX test/cpp_headers/net.o 00:03:43.980 CXX test/cpp_headers/nvme.o 00:03:43.980 CXX test/cpp_headers/nvme_intel.o 00:03:43.980 CXX test/cpp_headers/notify.o 00:03:43.980 CXX test/cpp_headers/nvme_ocssd.o 00:03:43.980 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:43.980 CXX test/cpp_headers/nvme_spec.o 00:03:43.980 CXX test/cpp_headers/nvme_zns.o 00:03:43.980 CXX test/cpp_headers/nvmf_cmd.o 00:03:43.980 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:43.980 CXX test/cpp_headers/nvmf.o 00:03:43.980 CXX test/cpp_headers/nvmf_spec.o 00:03:43.980 CXX test/cpp_headers/nvmf_transport.o 00:03:43.980 CXX test/cpp_headers/opal.o 00:03:43.980 CC examples/ioat/verify/verify.o 00:03:43.980 CC examples/util/zipf/zipf.o 00:03:43.980 CC test/thread/poller_perf/poller_perf.o 00:03:43.980 CC test/app/stub/stub.o 00:03:43.980 CC test/app/jsoncat/jsoncat.o 00:03:43.980 CC test/app/histogram_perf/histogram_perf.o 00:03:43.980 CC test/env/pci/pci_ut.o 00:03:43.980 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:43.980 CC test/env/vtophys/vtophys.o 00:03:43.980 CC test/env/memory/memory_ut.o 00:03:44.247 CC app/fio/nvme/fio_plugin.o 00:03:44.247 CC test/dma/test_dma/test_dma.o 00:03:44.247 LINK spdk_lspci 00:03:44.247 CC test/app/bdev_svc/bdev_svc.o 00:03:44.247 CC app/fio/bdev/fio_plugin.o 00:03:44.247 LINK 
rpc_client_test 00:03:44.247 LINK interrupt_tgt 00:03:44.511 LINK spdk_nvme_discover 00:03:44.511 LINK nvmf_tgt 00:03:44.511 LINK iscsi_tgt 00:03:44.511 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:44.512 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.512 CC test/env/mem_callbacks/mem_callbacks.o 00:03:44.512 LINK zipf 00:03:44.512 LINK histogram_perf 00:03:44.512 CXX test/cpp_headers/opal_spec.o 00:03:44.512 LINK spdk_tgt 00:03:44.512 LINK env_dpdk_post_init 00:03:44.512 LINK vtophys 00:03:44.512 CXX test/cpp_headers/pci_ids.o 00:03:44.512 LINK stub 00:03:44.512 CXX test/cpp_headers/pipe.o 00:03:44.512 CXX test/cpp_headers/queue.o 00:03:44.512 CXX test/cpp_headers/reduce.o 00:03:44.512 CXX test/cpp_headers/rpc.o 00:03:44.512 CXX test/cpp_headers/scheduler.o 00:03:44.512 CXX test/cpp_headers/scsi.o 00:03:44.512 CXX test/cpp_headers/scsi_spec.o 00:03:44.512 CXX test/cpp_headers/sock.o 00:03:44.512 CXX test/cpp_headers/stdinc.o 00:03:44.512 CXX test/cpp_headers/string.o 00:03:44.512 CXX test/cpp_headers/thread.o 00:03:44.512 CXX test/cpp_headers/trace.o 00:03:44.512 CXX test/cpp_headers/trace_parser.o 00:03:44.512 CXX test/cpp_headers/tree.o 00:03:44.770 CXX test/cpp_headers/ublk.o 00:03:44.770 CXX test/cpp_headers/util.o 00:03:44.771 CXX test/cpp_headers/uuid.o 00:03:44.771 CXX test/cpp_headers/version.o 00:03:44.771 CXX test/cpp_headers/vfio_user_pci.o 00:03:44.771 CXX test/cpp_headers/vfio_user_spec.o 00:03:44.771 LINK spdk_trace_record 00:03:44.771 CXX test/cpp_headers/vhost.o 00:03:44.771 CXX test/cpp_headers/vmd.o 00:03:44.771 CXX test/cpp_headers/xor.o 00:03:44.771 LINK poller_perf 00:03:44.771 CXX test/cpp_headers/zipf.o 00:03:44.771 LINK jsoncat 00:03:44.771 LINK ioat_perf 00:03:44.771 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:44.771 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:44.771 LINK spdk_trace 00:03:44.771 LINK bdev_svc 00:03:44.771 LINK verify 00:03:44.771 LINK spdk_dd 00:03:44.771 LINK pci_ut 00:03:45.028 LINK test_dma 00:03:45.028 CC 
examples/idxd/perf/perf.o 00:03:45.028 CC examples/sock/hello_world/hello_sock.o 00:03:45.028 CC examples/vmd/lsvmd/lsvmd.o 00:03:45.028 CC examples/vmd/led/led.o 00:03:45.028 CC examples/thread/thread/thread_ex.o 00:03:45.028 LINK nvme_fuzz 00:03:45.028 LINK spdk_nvme 00:03:45.028 CC app/vhost/vhost.o 00:03:45.028 LINK spdk_bdev 00:03:45.028 CC test/event/reactor_perf/reactor_perf.o 00:03:45.028 CC test/event/event_perf/event_perf.o 00:03:45.285 CC test/event/reactor/reactor.o 00:03:45.285 CC test/event/app_repeat/app_repeat.o 00:03:45.285 CC test/event/scheduler/scheduler.o 00:03:45.285 LINK spdk_top 00:03:45.285 LINK lsvmd 00:03:45.285 LINK vhost_fuzz 00:03:45.285 LINK spdk_nvme_identify 00:03:45.285 LINK led 00:03:45.285 LINK spdk_nvme_perf 00:03:45.285 LINK hello_sock 00:03:45.285 LINK event_perf 00:03:45.285 LINK mem_callbacks 00:03:45.285 LINK reactor_perf 00:03:45.285 LINK reactor 00:03:45.285 LINK idxd_perf 00:03:45.285 LINK thread 00:03:45.285 LINK vhost 00:03:45.285 LINK app_repeat 00:03:45.543 LINK scheduler 00:03:45.543 CC test/nvme/reset/reset.o 00:03:45.543 CC test/nvme/e2edp/nvme_dp.o 00:03:45.543 CC test/nvme/aer/aer.o 00:03:45.543 CC test/nvme/cuse/cuse.o 00:03:45.543 CC test/nvme/connect_stress/connect_stress.o 00:03:45.543 CC test/nvme/overhead/overhead.o 00:03:45.543 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:45.543 CC test/nvme/boot_partition/boot_partition.o 00:03:45.543 CC test/nvme/simple_copy/simple_copy.o 00:03:45.543 CC test/nvme/reserve/reserve.o 00:03:45.543 CC test/nvme/fused_ordering/fused_ordering.o 00:03:45.543 CC test/nvme/err_injection/err_injection.o 00:03:45.543 CC test/nvme/compliance/nvme_compliance.o 00:03:45.543 CC test/nvme/startup/startup.o 00:03:45.543 CC test/accel/dif/dif.o 00:03:45.543 CC test/nvme/sgl/sgl.o 00:03:45.543 CC test/nvme/fdp/fdp.o 00:03:45.543 CC test/blobfs/mkfs/mkfs.o 00:03:45.543 CC test/lvol/esnap/esnap.o 00:03:45.543 LINK memory_ut 00:03:45.803 CC examples/nvme/hello_world/hello_world.o 
00:03:45.803 CC examples/nvme/arbitration/arbitration.o 00:03:45.803 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:45.803 CC examples/nvme/reconnect/reconnect.o 00:03:45.803 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:45.803 CC examples/nvme/abort/abort.o 00:03:45.803 LINK boot_partition 00:03:45.803 CC examples/nvme/hotplug/hotplug.o 00:03:45.803 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:45.803 LINK connect_stress 00:03:45.803 LINK startup 00:03:45.803 LINK err_injection 00:03:45.803 LINK fused_ordering 00:03:45.803 LINK doorbell_aers 00:03:45.803 LINK reserve 00:03:45.803 LINK simple_copy 00:03:45.803 LINK mkfs 00:03:45.803 LINK aer 00:03:45.803 LINK reset 00:03:45.803 LINK sgl 00:03:45.803 LINK nvme_dp 00:03:45.803 LINK overhead 00:03:45.803 CC examples/accel/perf/accel_perf.o 00:03:45.803 LINK nvme_compliance 00:03:45.803 LINK fdp 00:03:45.803 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:45.803 CC examples/blob/cli/blobcli.o 00:03:45.803 CC examples/blob/hello_world/hello_blob.o 00:03:46.062 LINK cmb_copy 00:03:46.062 LINK pmr_persistence 00:03:46.062 LINK hello_world 00:03:46.062 LINK hotplug 00:03:46.062 LINK arbitration 00:03:46.062 LINK reconnect 00:03:46.062 LINK abort 00:03:46.062 LINK iscsi_fuzz 00:03:46.062 LINK nvme_manage 00:03:46.062 LINK dif 00:03:46.062 LINK hello_blob 00:03:46.321 LINK hello_fsdev 00:03:46.321 LINK accel_perf 00:03:46.321 LINK blobcli 00:03:46.581 LINK cuse 00:03:46.581 CC test/bdev/bdevio/bdevio.o 00:03:46.840 CC examples/bdev/hello_world/hello_bdev.o 00:03:46.840 CC examples/bdev/bdevperf/bdevperf.o 00:03:47.099 LINK hello_bdev 00:03:47.099 LINK bdevio 00:03:47.358 LINK bdevperf 00:03:47.926 CC examples/nvmf/nvmf/nvmf.o 00:03:48.186 LINK nvmf 00:03:49.124 LINK esnap 00:03:49.383 00:03:49.383 real 0m55.819s 00:03:49.383 user 8m17.265s 00:03:49.383 sys 3m47.141s 00:03:49.383 05:25:37 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:49.383 05:25:37 make -- common/autotest_common.sh@10 -- $ set 
+x 00:03:49.383 ************************************ 00:03:49.383 END TEST make 00:03:49.383 ************************************ 00:03:49.643 05:25:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:49.643 05:25:37 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:49.643 05:25:37 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:49.643 05:25:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.643 05:25:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:49.643 05:25:37 -- pm/common@44 -- $ pid=1487158 00:03:49.643 05:25:37 -- pm/common@50 -- $ kill -TERM 1487158 00:03:49.643 05:25:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.643 05:25:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:49.643 05:25:37 -- pm/common@44 -- $ pid=1487160 00:03:49.643 05:25:37 -- pm/common@50 -- $ kill -TERM 1487160 00:03:49.643 05:25:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.644 05:25:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:49.644 05:25:37 -- pm/common@44 -- $ pid=1487162 00:03:49.644 05:25:37 -- pm/common@50 -- $ kill -TERM 1487162 00:03:49.644 05:25:37 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.644 05:25:37 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:49.644 05:25:37 -- pm/common@44 -- $ pid=1487186 00:03:49.644 05:25:37 -- pm/common@50 -- $ sudo -E kill -TERM 1487186 00:03:49.644 05:25:37 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:49.644 05:25:37 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:49.644 05:25:37 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:49.644 05:25:37 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:49.644 05:25:37 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:49.644 05:25:37 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:49.644 05:25:37 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.644 05:25:37 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.644 05:25:37 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.644 05:25:37 -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.644 05:25:37 -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.644 05:25:37 -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.644 05:25:37 -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.644 05:25:37 -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.644 05:25:37 -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.644 05:25:37 -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.644 05:25:37 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.644 05:25:37 -- scripts/common.sh@344 -- # case "$op" in 00:03:49.644 05:25:37 -- scripts/common.sh@345 -- # : 1 00:03:49.644 05:25:37 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.644 05:25:37 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.644 05:25:37 -- scripts/common.sh@365 -- # decimal 1 00:03:49.644 05:25:37 -- scripts/common.sh@353 -- # local d=1 00:03:49.644 05:25:37 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.644 05:25:37 -- scripts/common.sh@355 -- # echo 1 00:03:49.644 05:25:37 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.644 05:25:37 -- scripts/common.sh@366 -- # decimal 2 00:03:49.644 05:25:37 -- scripts/common.sh@353 -- # local d=2 00:03:49.644 05:25:37 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.644 05:25:37 -- scripts/common.sh@355 -- # echo 2 00:03:49.644 05:25:37 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.644 05:25:37 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.644 05:25:37 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.644 05:25:37 -- scripts/common.sh@368 -- # return 0 00:03:49.644 05:25:37 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.644 05:25:37 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:49.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.644 --rc genhtml_branch_coverage=1 00:03:49.644 --rc genhtml_function_coverage=1 00:03:49.644 --rc genhtml_legend=1 00:03:49.644 --rc geninfo_all_blocks=1 00:03:49.644 --rc geninfo_unexecuted_blocks=1 00:03:49.644 00:03:49.644 ' 00:03:49.644 05:25:37 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:49.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.644 --rc genhtml_branch_coverage=1 00:03:49.644 --rc genhtml_function_coverage=1 00:03:49.644 --rc genhtml_legend=1 00:03:49.644 --rc geninfo_all_blocks=1 00:03:49.644 --rc geninfo_unexecuted_blocks=1 00:03:49.644 00:03:49.644 ' 00:03:49.644 05:25:37 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:49.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.644 --rc genhtml_branch_coverage=1 00:03:49.644 --rc 
genhtml_function_coverage=1 00:03:49.644 --rc genhtml_legend=1 00:03:49.644 --rc geninfo_all_blocks=1 00:03:49.644 --rc geninfo_unexecuted_blocks=1 00:03:49.644 00:03:49.644 ' 00:03:49.644 05:25:37 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:49.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.644 --rc genhtml_branch_coverage=1 00:03:49.644 --rc genhtml_function_coverage=1 00:03:49.644 --rc genhtml_legend=1 00:03:49.644 --rc geninfo_all_blocks=1 00:03:49.644 --rc geninfo_unexecuted_blocks=1 00:03:49.644 00:03:49.644 ' 00:03:49.644 05:25:37 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:49.644 05:25:37 -- nvmf/common.sh@7 -- # uname -s 00:03:49.644 05:25:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.644 05:25:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.644 05:25:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.644 05:25:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.644 05:25:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.644 05:25:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.644 05:25:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.644 05:25:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.644 05:25:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.644 05:25:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.644 05:25:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:49.644 05:25:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:49.644 05:25:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.644 05:25:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.644 05:25:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:49.644 05:25:37 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.644 05:25:37 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:49.644 05:25:37 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:49.644 05:25:37 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.644 05:25:37 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.644 05:25:37 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.644 05:25:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.644 05:25:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.644 05:25:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.644 05:25:37 -- paths/export.sh@5 -- # export PATH 00:03:49.644 05:25:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.644 05:25:37 -- nvmf/common.sh@51 -- # : 0 00:03:49.644 05:25:37 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:49.644 05:25:37 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:49.644 05:25:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.644 05:25:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.644 05:25:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.644 05:25:37 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:49.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:49.644 05:25:37 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:49.644 05:25:37 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:49.644 05:25:37 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:49.644 05:25:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:49.644 05:25:37 -- spdk/autotest.sh@32 -- # uname -s 00:03:49.644 05:25:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:49.644 05:25:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:49.644 05:25:37 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:49.644 05:25:37 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:49.644 05:25:37 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:49.644 05:25:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:49.644 05:25:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:49.644 05:25:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:49.644 05:25:37 -- spdk/autotest.sh@48 -- # udevadm_pid=1549655 00:03:49.644 05:25:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:49.644 05:25:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:49.644 05:25:37 -- pm/common@17 -- # local monitor 00:03:49.644 05:25:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.644 05:25:37 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:49.904 05:25:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.904 05:25:37 -- pm/common@21 -- # date +%s 00:03:49.904 05:25:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.904 05:25:37 -- pm/common@21 -- # date +%s 00:03:49.904 05:25:37 -- pm/common@25 -- # sleep 1 00:03:49.904 05:25:37 -- pm/common@21 -- # date +%s 00:03:49.904 05:25:37 -- pm/common@21 -- # date +%s 00:03:49.904 05:25:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732681537 00:03:49.904 05:25:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732681537 00:03:49.904 05:25:37 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732681537 00:03:49.904 05:25:37 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732681537 00:03:49.904 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732681537_collect-vmstat.pm.log 00:03:49.904 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732681537_collect-cpu-load.pm.log 00:03:49.904 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732681537_collect-cpu-temp.pm.log 00:03:49.904 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732681537_collect-bmc-pm.bmc.pm.log 00:03:50.842 
05:25:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.842 05:25:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:50.842 05:25:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.842 05:25:38 -- common/autotest_common.sh@10 -- # set +x 00:03:50.842 05:25:38 -- spdk/autotest.sh@59 -- # create_test_list 00:03:50.842 05:25:38 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:50.842 05:25:38 -- common/autotest_common.sh@10 -- # set +x 00:03:50.842 05:25:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:50.842 05:25:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.842 05:25:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.842 05:25:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:50.842 05:25:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.842 05:25:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:50.842 05:25:38 -- common/autotest_common.sh@1457 -- # uname 00:03:50.842 05:25:38 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:50.842 05:25:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:50.842 05:25:38 -- common/autotest_common.sh@1477 -- # uname 00:03:50.842 05:25:38 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:50.842 05:25:38 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:50.842 05:25:38 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:50.842 lcov: LCOV version 1.15 00:03:50.842 05:25:38 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:03.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:03.055 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:17.943 05:26:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:17.943 05:26:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.943 05:26:03 -- common/autotest_common.sh@10 -- # set +x 00:04:17.943 05:26:03 -- spdk/autotest.sh@78 -- # rm -f 00:04:17.943 05:26:03 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.512 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:18.512 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:18.512 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:18.512 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:18.772 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:18.772 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:18.772 05:26:06 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:18.772 05:26:06 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:18.772 05:26:06 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:18.772 05:26:06 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:18.772 05:26:06 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:18.772 05:26:06 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:18.772 05:26:06 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:18.772 05:26:06 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.772 05:26:06 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:18.772 05:26:06 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:18.772 05:26:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.772 05:26:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.772 05:26:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:18.772 05:26:06 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:18.772 05:26:06 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:18.772 No valid GPT data, bailing 00:04:18.772 05:26:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.772 05:26:06 -- scripts/common.sh@394 -- # pt= 00:04:18.772 05:26:06 -- scripts/common.sh@395 -- # return 1 00:04:18.772 05:26:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:18.772 1+0 records in 00:04:18.772 1+0 records out 00:04:18.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00173676 s, 604 MB/s 00:04:18.772 05:26:06 -- spdk/autotest.sh@105 -- # sync 00:04:18.772 05:26:06 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:18.772 05:26:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:18.772 05:26:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:25.354 05:26:12 -- spdk/autotest.sh@111 -- # uname -s 00:04:25.354 05:26:12 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:25.354 05:26:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:25.354 05:26:12 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:27.259 Hugepages 00:04:27.259 node hugesize free / total 00:04:27.259 node0 1048576kB 0 / 0 00:04:27.259 node0 2048kB 0 / 0 00:04:27.259 node1 1048576kB 0 / 0 00:04:27.259 node1 2048kB 0 / 0 00:04:27.259 00:04:27.259 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.259 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:27.259 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:27.259 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:27.259 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:27.259 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:27.259 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:27.259 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:27.259 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:27.259 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:27.259 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:27.259 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:27.259 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:27.259 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:27.259 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:27.259 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:27.259 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:27.259 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:27.259 05:26:15 -- spdk/autotest.sh@117 -- # uname -s 00:04:27.259 05:26:15 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:27.259 05:26:15 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:04:27.259 05:26:15 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.549 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:30.549 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:31.485 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:31.744 05:26:19 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:32.681 05:26:20 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:32.681 05:26:20 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:32.681 05:26:20 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.681 05:26:20 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:32.681 05:26:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:32.681 05:26:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:32.681 05:26:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.681 05:26:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:32.681 05:26:20 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:04:32.681 05:26:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:32.681 05:26:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:32.681 05:26:20 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.967 Waiting for block devices as requested 00:04:35.967 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:35.967 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:35.967 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:35.967 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:35.967 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:35.967 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:35.967 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.226 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.226 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:36.226 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:36.485 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:36.485 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:36.485 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:36.485 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:36.744 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.744 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.745 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:37.004 05:26:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:37.004 05:26:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:37.004 05:26:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:37.004 05:26:24 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:37.004 05:26:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:37.004 05:26:24 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:37.004 05:26:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:37.004 05:26:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:37.004 05:26:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:37.004 05:26:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:37.004 05:26:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:37.004 05:26:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:37.004 05:26:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:37.004 05:26:24 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:37.004 05:26:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:37.004 05:26:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:37.004 05:26:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:37.004 05:26:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:37.004 05:26:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:37.004 05:26:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:37.004 05:26:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:37.004 05:26:24 -- common/autotest_common.sh@1543 -- # continue 00:04:37.004 05:26:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:37.004 05:26:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.004 05:26:24 -- common/autotest_common.sh@10 -- # set +x 00:04:37.004 05:26:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:37.004 05:26:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.004 05:26:24 -- common/autotest_common.sh@10 -- # set +x 00:04:37.004 05:26:24 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.299 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:04:40.299 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.299 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:41.679 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:41.679 05:26:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:41.679 05:26:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.679 05:26:29 -- common/autotest_common.sh@10 -- # set +x 00:04:41.679 05:26:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:41.679 05:26:29 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:41.679 05:26:29 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:41.679 05:26:29 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:41.679 05:26:29 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:41.679 05:26:29 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:41.679 05:26:29 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:41.679 05:26:29 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:41.679 05:26:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:41.679 05:26:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:41.679 05:26:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:04:41.679 05:26:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:41.679 05:26:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:41.679 05:26:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:41.679 05:26:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:41.679 05:26:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:41.679 05:26:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:41.679 05:26:29 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:41.679 05:26:29 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:41.679 05:26:29 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:41.679 05:26:29 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:41.679 05:26:29 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:41.679 05:26:29 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:41.679 05:26:29 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1563884 00:04:41.679 05:26:29 -- common/autotest_common.sh@1585 -- # waitforlisten 1563884 00:04:41.679 05:26:29 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.679 05:26:29 -- common/autotest_common.sh@835 -- # '[' -z 1563884 ']' 00:04:41.679 05:26:29 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.679 05:26:29 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.679 05:26:29 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:41.679 05:26:29 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.679 05:26:29 -- common/autotest_common.sh@10 -- # set +x 00:04:41.679 [2024-11-27 05:26:29.602551] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:04:41.679 [2024-11-27 05:26:29.602604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563884 ] 00:04:41.679 [2024-11-27 05:26:29.678213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.938 [2024-11-27 05:26:29.720383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.938 05:26:29 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.938 05:26:29 -- common/autotest_common.sh@868 -- # return 0 00:04:41.938 05:26:29 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:41.938 05:26:29 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:41.938 05:26:29 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:45.229 nvme0n1 00:04:45.229 05:26:32 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:45.229 [2024-11-27 05:26:33.119465] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:45.229 request: 00:04:45.229 { 00:04:45.229 "nvme_ctrlr_name": "nvme0", 00:04:45.229 "password": "test", 00:04:45.229 "method": "bdev_nvme_opal_revert", 00:04:45.229 "req_id": 1 00:04:45.229 } 00:04:45.229 Got JSON-RPC error response 00:04:45.229 response: 00:04:45.229 { 00:04:45.229 "code": -32602, 00:04:45.229 "message": "Invalid parameters" 00:04:45.229 } 00:04:45.229 05:26:33 -- common/autotest_common.sh@1591 -- # true 
00:04:45.229 05:26:33 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:45.229 05:26:33 -- common/autotest_common.sh@1595 -- # killprocess 1563884 00:04:45.229 05:26:33 -- common/autotest_common.sh@954 -- # '[' -z 1563884 ']' 00:04:45.229 05:26:33 -- common/autotest_common.sh@958 -- # kill -0 1563884 00:04:45.229 05:26:33 -- common/autotest_common.sh@959 -- # uname 00:04:45.229 05:26:33 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.229 05:26:33 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1563884 00:04:45.229 05:26:33 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.229 05:26:33 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.229 05:26:33 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1563884' 00:04:45.229 killing process with pid 1563884 00:04:45.229 05:26:33 -- common/autotest_common.sh@973 -- # kill 1563884 00:04:45.229 05:26:33 -- common/autotest_common.sh@978 -- # wait 1563884 00:04:47.766 05:26:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:47.766 05:26:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:47.766 05:26:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:47.766 05:26:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:47.766 05:26:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:47.766 05:26:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.766 05:26:35 -- common/autotest_common.sh@10 -- # set +x 00:04:47.766 05:26:35 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:47.766 05:26:35 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:47.766 05:26:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.766 05:26:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.766 05:26:35 -- common/autotest_common.sh@10 -- # set +x 00:04:47.766 ************************************ 00:04:47.766 START TEST env 00:04:47.766 
************************************ 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:47.766 * Looking for test storage... 00:04:47.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.766 05:26:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.766 05:26:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.766 05:26:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.766 05:26:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.766 05:26:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.766 05:26:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.766 05:26:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.766 05:26:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.766 05:26:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.766 05:26:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.766 05:26:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.766 05:26:35 env -- scripts/common.sh@344 -- # case "$op" in 00:04:47.766 05:26:35 env -- scripts/common.sh@345 -- # : 1 00:04:47.766 05:26:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.766 05:26:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.766 05:26:35 env -- scripts/common.sh@365 -- # decimal 1 00:04:47.766 05:26:35 env -- scripts/common.sh@353 -- # local d=1 00:04:47.766 05:26:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.766 05:26:35 env -- scripts/common.sh@355 -- # echo 1 00:04:47.766 05:26:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.766 05:26:35 env -- scripts/common.sh@366 -- # decimal 2 00:04:47.766 05:26:35 env -- scripts/common.sh@353 -- # local d=2 00:04:47.766 05:26:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.766 05:26:35 env -- scripts/common.sh@355 -- # echo 2 00:04:47.766 05:26:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.766 05:26:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.766 05:26:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.766 05:26:35 env -- scripts/common.sh@368 -- # return 0 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.766 --rc genhtml_branch_coverage=1 00:04:47.766 --rc genhtml_function_coverage=1 00:04:47.766 --rc genhtml_legend=1 00:04:47.766 --rc geninfo_all_blocks=1 00:04:47.766 --rc geninfo_unexecuted_blocks=1 00:04:47.766 00:04:47.766 ' 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.766 --rc genhtml_branch_coverage=1 00:04:47.766 --rc genhtml_function_coverage=1 00:04:47.766 --rc genhtml_legend=1 00:04:47.766 --rc geninfo_all_blocks=1 00:04:47.766 --rc geninfo_unexecuted_blocks=1 00:04:47.766 00:04:47.766 ' 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:47.766 --rc genhtml_branch_coverage=1 00:04:47.766 --rc genhtml_function_coverage=1 00:04:47.766 --rc genhtml_legend=1 00:04:47.766 --rc geninfo_all_blocks=1 00:04:47.766 --rc geninfo_unexecuted_blocks=1 00:04:47.766 00:04:47.766 ' 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.766 --rc genhtml_branch_coverage=1 00:04:47.766 --rc genhtml_function_coverage=1 00:04:47.766 --rc genhtml_legend=1 00:04:47.766 --rc geninfo_all_blocks=1 00:04:47.766 --rc geninfo_unexecuted_blocks=1 00:04:47.766 00:04:47.766 ' 00:04:47.766 05:26:35 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.766 05:26:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.766 05:26:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.766 ************************************ 00:04:47.766 START TEST env_memory 00:04:47.766 ************************************ 00:04:47.766 05:26:35 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:47.766 00:04:47.766 00:04:47.766 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.766 http://cunit.sourceforge.net/ 00:04:47.766 00:04:47.766 00:04:47.766 Suite: memory 00:04:47.766 Test: alloc and free memory map ...[2024-11-27 05:26:35.657843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:47.766 passed 00:04:47.766 Test: mem map translation ...[2024-11-27 05:26:35.675493] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:47.766 [2024-11-27 
05:26:35.675516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:47.766 [2024-11-27 05:26:35.675547] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:47.766 [2024-11-27 05:26:35.675553] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:47.766 passed 00:04:47.766 Test: mem map registration ...[2024-11-27 05:26:35.711100] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:47.766 [2024-11-27 05:26:35.711113] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:47.766 passed 00:04:47.766 Test: mem map adjacent registrations ...passed 00:04:47.766 00:04:47.766 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.766 suites 1 1 n/a 0 0 00:04:47.766 tests 4 4 4 0 0 00:04:47.766 asserts 152 152 152 0 n/a 00:04:47.766 00:04:47.766 Elapsed time = 0.130 seconds 00:04:47.766 00:04:47.766 real 0m0.143s 00:04:47.766 user 0m0.137s 00:04:47.766 sys 0m0.006s 00:04:47.766 05:26:35 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.766 05:26:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:47.766 ************************************ 00:04:47.766 END TEST env_memory 00:04:47.766 ************************************ 00:04:48.026 05:26:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.026 05:26:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:48.026 05:26:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.026 05:26:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.026 ************************************ 00:04:48.026 START TEST env_vtophys 00:04:48.026 ************************************ 00:04:48.026 05:26:35 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.026 EAL: lib.eal log level changed from notice to debug 00:04:48.026 EAL: Detected lcore 0 as core 0 on socket 0 00:04:48.026 EAL: Detected lcore 1 as core 1 on socket 0 00:04:48.026 EAL: Detected lcore 2 as core 2 on socket 0 00:04:48.026 EAL: Detected lcore 3 as core 3 on socket 0 00:04:48.026 EAL: Detected lcore 4 as core 4 on socket 0 00:04:48.026 EAL: Detected lcore 5 as core 5 on socket 0 00:04:48.026 EAL: Detected lcore 6 as core 6 on socket 0 00:04:48.026 EAL: Detected lcore 7 as core 8 on socket 0 00:04:48.026 EAL: Detected lcore 8 as core 9 on socket 0 00:04:48.026 EAL: Detected lcore 9 as core 10 on socket 0 00:04:48.026 EAL: Detected lcore 10 as core 11 on socket 0 00:04:48.026 EAL: Detected lcore 11 as core 12 on socket 0 00:04:48.026 EAL: Detected lcore 12 as core 13 on socket 0 00:04:48.026 EAL: Detected lcore 13 as core 16 on socket 0 00:04:48.026 EAL: Detected lcore 14 as core 17 on socket 0 00:04:48.026 EAL: Detected lcore 15 as core 18 on socket 0 00:04:48.026 EAL: Detected lcore 16 as core 19 on socket 0 00:04:48.026 EAL: Detected lcore 17 as core 20 on socket 0 00:04:48.026 EAL: Detected lcore 18 as core 21 on socket 0 00:04:48.026 EAL: Detected lcore 19 as core 25 on socket 0 00:04:48.026 EAL: Detected lcore 20 as core 26 on socket 0 00:04:48.026 EAL: Detected lcore 21 as core 27 on socket 0 00:04:48.026 EAL: Detected lcore 22 as core 28 on socket 0 00:04:48.026 EAL: Detected lcore 23 as core 29 on socket 0 00:04:48.026 EAL: Detected lcore 24 as core 0 on socket 1 00:04:48.026 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:48.026 EAL: Detected lcore 26 as core 2 on socket 1 00:04:48.026 EAL: Detected lcore 27 as core 3 on socket 1 00:04:48.026 EAL: Detected lcore 28 as core 4 on socket 1 00:04:48.026 EAL: Detected lcore 29 as core 5 on socket 1 00:04:48.026 EAL: Detected lcore 30 as core 6 on socket 1 00:04:48.026 EAL: Detected lcore 31 as core 8 on socket 1 00:04:48.026 EAL: Detected lcore 32 as core 10 on socket 1 00:04:48.026 EAL: Detected lcore 33 as core 11 on socket 1 00:04:48.026 EAL: Detected lcore 34 as core 12 on socket 1 00:04:48.026 EAL: Detected lcore 35 as core 13 on socket 1 00:04:48.026 EAL: Detected lcore 36 as core 16 on socket 1 00:04:48.026 EAL: Detected lcore 37 as core 17 on socket 1 00:04:48.026 EAL: Detected lcore 38 as core 18 on socket 1 00:04:48.026 EAL: Detected lcore 39 as core 19 on socket 1 00:04:48.026 EAL: Detected lcore 40 as core 20 on socket 1 00:04:48.026 EAL: Detected lcore 41 as core 21 on socket 1 00:04:48.026 EAL: Detected lcore 42 as core 24 on socket 1 00:04:48.026 EAL: Detected lcore 43 as core 25 on socket 1 00:04:48.026 EAL: Detected lcore 44 as core 26 on socket 1 00:04:48.026 EAL: Detected lcore 45 as core 27 on socket 1 00:04:48.026 EAL: Detected lcore 46 as core 28 on socket 1 00:04:48.026 EAL: Detected lcore 47 as core 29 on socket 1 00:04:48.026 EAL: Detected lcore 48 as core 0 on socket 0 00:04:48.026 EAL: Detected lcore 49 as core 1 on socket 0 00:04:48.026 EAL: Detected lcore 50 as core 2 on socket 0 00:04:48.026 EAL: Detected lcore 51 as core 3 on socket 0 00:04:48.026 EAL: Detected lcore 52 as core 4 on socket 0 00:04:48.026 EAL: Detected lcore 53 as core 5 on socket 0 00:04:48.026 EAL: Detected lcore 54 as core 6 on socket 0 00:04:48.026 EAL: Detected lcore 55 as core 8 on socket 0 00:04:48.026 EAL: Detected lcore 56 as core 9 on socket 0 00:04:48.027 EAL: Detected lcore 57 as core 10 on socket 0 00:04:48.027 EAL: Detected lcore 58 as core 11 on socket 0 00:04:48.027 EAL: Detected lcore 59 as core 
12 on socket 0 00:04:48.027 EAL: Detected lcore 60 as core 13 on socket 0 00:04:48.027 EAL: Detected lcore 61 as core 16 on socket 0 00:04:48.027 EAL: Detected lcore 62 as core 17 on socket 0 00:04:48.027 EAL: Detected lcore 63 as core 18 on socket 0 00:04:48.027 EAL: Detected lcore 64 as core 19 on socket 0 00:04:48.027 EAL: Detected lcore 65 as core 20 on socket 0 00:04:48.027 EAL: Detected lcore 66 as core 21 on socket 0 00:04:48.027 EAL: Detected lcore 67 as core 25 on socket 0 00:04:48.027 EAL: Detected lcore 68 as core 26 on socket 0 00:04:48.027 EAL: Detected lcore 69 as core 27 on socket 0 00:04:48.027 EAL: Detected lcore 70 as core 28 on socket 0 00:04:48.027 EAL: Detected lcore 71 as core 29 on socket 0 00:04:48.027 EAL: Detected lcore 72 as core 0 on socket 1 00:04:48.027 EAL: Detected lcore 73 as core 1 on socket 1 00:04:48.027 EAL: Detected lcore 74 as core 2 on socket 1 00:04:48.027 EAL: Detected lcore 75 as core 3 on socket 1 00:04:48.027 EAL: Detected lcore 76 as core 4 on socket 1 00:04:48.027 EAL: Detected lcore 77 as core 5 on socket 1 00:04:48.027 EAL: Detected lcore 78 as core 6 on socket 1 00:04:48.027 EAL: Detected lcore 79 as core 8 on socket 1 00:04:48.027 EAL: Detected lcore 80 as core 10 on socket 1 00:04:48.027 EAL: Detected lcore 81 as core 11 on socket 1 00:04:48.027 EAL: Detected lcore 82 as core 12 on socket 1 00:04:48.027 EAL: Detected lcore 83 as core 13 on socket 1 00:04:48.027 EAL: Detected lcore 84 as core 16 on socket 1 00:04:48.027 EAL: Detected lcore 85 as core 17 on socket 1 00:04:48.027 EAL: Detected lcore 86 as core 18 on socket 1 00:04:48.027 EAL: Detected lcore 87 as core 19 on socket 1 00:04:48.027 EAL: Detected lcore 88 as core 20 on socket 1 00:04:48.027 EAL: Detected lcore 89 as core 21 on socket 1 00:04:48.027 EAL: Detected lcore 90 as core 24 on socket 1 00:04:48.027 EAL: Detected lcore 91 as core 25 on socket 1 00:04:48.027 EAL: Detected lcore 92 as core 26 on socket 1 00:04:48.027 EAL: Detected lcore 93 as core 
27 on socket 1 00:04:48.027 EAL: Detected lcore 94 as core 28 on socket 1 00:04:48.027 EAL: Detected lcore 95 as core 29 on socket 1 00:04:48.027 EAL: Maximum logical cores by configuration: 128 00:04:48.027 EAL: Detected CPU lcores: 96 00:04:48.027 EAL: Detected NUMA nodes: 2 00:04:48.027 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:48.027 EAL: Detected shared linkage of DPDK 00:04:48.027 EAL: No shared files mode enabled, IPC will be disabled 00:04:48.027 EAL: Bus pci wants IOVA as 'DC' 00:04:48.027 EAL: Buses did not request a specific IOVA mode. 00:04:48.027 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:48.027 EAL: Selected IOVA mode 'VA' 00:04:48.027 EAL: Probing VFIO support... 00:04:48.027 EAL: IOMMU type 1 (Type 1) is supported 00:04:48.027 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:48.027 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:48.027 EAL: VFIO support initialized 00:04:48.027 EAL: Ask a virtual area of 0x2e000 bytes 00:04:48.027 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:48.027 EAL: Setting up physically contiguous memory... 
00:04:48.027 EAL: Setting maximum number of open files to 524288
00:04:48.027 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:48.027 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:48.027 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:48.027 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.027 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:48.027 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:48.027 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.027 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:48.027 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:48.027 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.027 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:48.027 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:48.027 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.027 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:48.027 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:48.027 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.027 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:48.027 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:48.027 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.027 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:48.027 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:48.027 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.027 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:48.027 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:48.027 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.027 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:48.027 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:48.027 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:48.027 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.027 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:48.027 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.027 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.027 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:48.027 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:48.027 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.027 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:48.027 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.027 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.027 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:48.027 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:48.027 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.027 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:48.027 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.027 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.027 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:48.027 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:48.027 EAL: Ask a virtual area of 0x61000 bytes
00:04:48.027 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:48.027 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:48.027 EAL: Ask a virtual area of 0x400000000 bytes
00:04:48.027 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:48.027 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:48.027 EAL: Hugepages will be freed exactly as allocated.
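Each memseg list above reserves virtual address space for n_segs:8192 segments of the detected 2 MiB hugepage size, which is where the repeated 0x400000000 reservations come from. The arithmetic can be checked with a quick sketch (values taken from the log itself):

```shell
# Check the VA reservation size seen in the log: 8192 segments per
# memseg list, each segment one 2 MiB hugepage (hugepage_sz:2097152).
n_segs=8192
hugepage_sz=2097152
reserved=$((n_segs * hugepage_sz))
printf 'reserved per memseg list: 0x%x bytes (%d GiB)\n' \
  "$reserved" $((reserved / 1024 / 1024 / 1024))
# → reserved per memseg list: 0x400000000 bytes (16 GiB), matching the log
```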
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: TSC frequency is ~2100000 KHz
00:04:48.027 EAL: Main lcore 0 is ready (tid=7f21e7d4ba00;cpuset=[0])
00:04:48.027 EAL: Trying to obtain current memory policy.
00:04:48.027 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.027 EAL: Restoring previous memory policy: 0
00:04:48.027 EAL: request: mp_malloc_sync
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: Heap on socket 0 was expanded by 2MB
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:48.027 EAL: Mem event callback 'spdk:(nil)' registered
00:04:48.027
00:04:48.027
00:04:48.027 CUnit - A unit testing framework for C - Version 2.1-3
00:04:48.027 http://cunit.sourceforge.net/
00:04:48.027
00:04:48.027
00:04:48.027 Suite: components_suite
00:04:48.027 Test: vtophys_malloc_test ...passed
00:04:48.027 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:48.027 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.027 EAL: Restoring previous memory policy: 4
00:04:48.027 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.027 EAL: request: mp_malloc_sync
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: Heap on socket 0 was expanded by 4MB
00:04:48.027 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.027 EAL: request: mp_malloc_sync
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: Heap on socket 0 was shrunk by 4MB
00:04:48.027 EAL: Trying to obtain current memory policy.
00:04:48.027 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.027 EAL: Restoring previous memory policy: 4
00:04:48.027 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.027 EAL: request: mp_malloc_sync
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: Heap on socket 0 was expanded by 6MB
00:04:48.027 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.027 EAL: request: mp_malloc_sync
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: Heap on socket 0 was shrunk by 6MB
00:04:48.027 EAL: Trying to obtain current memory policy.
00:04:48.027 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.027 EAL: Restoring previous memory policy: 4
00:04:48.027 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.027 EAL: request: mp_malloc_sync
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: Heap on socket 0 was expanded by 10MB
00:04:48.027 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.027 EAL: request: mp_malloc_sync
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: Heap on socket 0 was shrunk by 10MB
00:04:48.027 EAL: Trying to obtain current memory policy.
00:04:48.027 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.027 EAL: Restoring previous memory policy: 4
00:04:48.027 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.027 EAL: request: mp_malloc_sync
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: Heap on socket 0 was expanded by 18MB
00:04:48.027 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.027 EAL: request: mp_malloc_sync
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: Heap on socket 0 was shrunk by 18MB
00:04:48.027 EAL: Trying to obtain current memory policy.
00:04:48.027 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.027 EAL: Restoring previous memory policy: 4
00:04:48.027 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.027 EAL: request: mp_malloc_sync
00:04:48.027 EAL: No shared files mode enabled, IPC is disabled
00:04:48.027 EAL: Heap on socket 0 was expanded by 34MB
00:04:48.027 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.028 EAL: request: mp_malloc_sync
00:04:48.028 EAL: No shared files mode enabled, IPC is disabled
00:04:48.028 EAL: Heap on socket 0 was shrunk by 34MB
00:04:48.028 EAL: Trying to obtain current memory policy.
00:04:48.028 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.028 EAL: Restoring previous memory policy: 4
00:04:48.028 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.028 EAL: request: mp_malloc_sync
00:04:48.028 EAL: No shared files mode enabled, IPC is disabled
00:04:48.028 EAL: Heap on socket 0 was expanded by 66MB
00:04:48.028 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.028 EAL: request: mp_malloc_sync
00:04:48.028 EAL: No shared files mode enabled, IPC is disabled
00:04:48.028 EAL: Heap on socket 0 was shrunk by 66MB
00:04:48.028 EAL: Trying to obtain current memory policy.
00:04:48.028 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.028 EAL: Restoring previous memory policy: 4
00:04:48.028 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.028 EAL: request: mp_malloc_sync
00:04:48.028 EAL: No shared files mode enabled, IPC is disabled
00:04:48.028 EAL: Heap on socket 0 was expanded by 130MB
00:04:48.028 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.287 EAL: request: mp_malloc_sync
00:04:48.287 EAL: No shared files mode enabled, IPC is disabled
00:04:48.287 EAL: Heap on socket 0 was shrunk by 130MB
00:04:48.287 EAL: Trying to obtain current memory policy.
00:04:48.287 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.287 EAL: Restoring previous memory policy: 4
00:04:48.287 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.287 EAL: request: mp_malloc_sync
00:04:48.287 EAL: No shared files mode enabled, IPC is disabled
00:04:48.287 EAL: Heap on socket 0 was expanded by 258MB
00:04:48.287 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.287 EAL: request: mp_malloc_sync
00:04:48.287 EAL: No shared files mode enabled, IPC is disabled
00:04:48.287 EAL: Heap on socket 0 was shrunk by 258MB
00:04:48.287 EAL: Trying to obtain current memory policy.
00:04:48.287 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.287 EAL: Restoring previous memory policy: 4
00:04:48.287 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.287 EAL: request: mp_malloc_sync
00:04:48.287 EAL: No shared files mode enabled, IPC is disabled
00:04:48.287 EAL: Heap on socket 0 was expanded by 514MB
00:04:48.546 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.546 EAL: request: mp_malloc_sync
00:04:48.546 EAL: No shared files mode enabled, IPC is disabled
00:04:48.546 EAL: Heap on socket 0 was shrunk by 514MB
00:04:48.546 EAL: Trying to obtain current memory policy.
00:04:48.546 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:48.805 EAL: Restoring previous memory policy: 4
00:04:48.805 EAL: Calling mem event callback 'spdk:(nil)'
00:04:48.805 EAL: request: mp_malloc_sync
00:04:48.805 EAL: No shared files mode enabled, IPC is disabled
00:04:48.805 EAL: Heap on socket 0 was expanded by 1026MB
00:04:48.805 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.065 EAL: request: mp_malloc_sync
00:04:49.065 EAL: No shared files mode enabled, IPC is disabled
00:04:49.065 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:49.065 passed
00:04:49.065
00:04:49.065 Run Summary: Type Total Ran Passed Failed Inactive
00:04:49.065 suites 1 1 n/a 0 0
00:04:49.065 tests 2 2 2 0 0
00:04:49.065 asserts 497 497 497 0 n/a
00:04:49.065
00:04:49.065 Elapsed time = 0.969 seconds
00:04:49.065 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.065 EAL: request: mp_malloc_sync
00:04:49.065 EAL: No shared files mode enabled, IPC is disabled
00:04:49.065 EAL: Heap on socket 0 was shrunk by 2MB
00:04:49.065 EAL: No shared files mode enabled, IPC is disabled
00:04:49.065 EAL: No shared files mode enabled, IPC is disabled
00:04:49.065 EAL: No shared files mode enabled, IPC is disabled
00:04:49.065
00:04:49.065 real 0m1.100s
00:04:49.065 user 0m0.644s
00:04:49.065 sys 0m0.427s
00:04:49.065 05:26:36 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:49.065 05:26:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:49.065 ************************************
00:04:49.065 END TEST env_vtophys
00:04:49.065 ************************************
00:04:49.065 05:26:36 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:49.065 05:26:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:49.065 05:26:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:49.065 05:26:36 env -- common/autotest_common.sh@10 -- # set +x
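The expand/shrink sizes in the vtophys_spdk_malloc_test output above follow a pattern: the test allocates doubling buffers (2 MB up to 1 GB), and each heap expansion is the power-of-two size plus one extra 2 MB hugepage, hence 4MB, 6MB, 10MB, ... 1026MB. A sketch reproducing the sequence (the extra-hugepage interpretation is our reading of the log, not something the test prints):

```shell
# Reproduce the "Heap on socket 0 was expanded by ..." sizes seen in the
# vtophys_spdk_malloc_test output: doubling allocations of 2^k MB, each
# costing one extra 2 MB hugepage of heap on top of the requested size.
sizes=""
for k in 1 2 3 4 5 6 7 8 9 10; do
  sizes="$sizes $((2 ** k + 2))MB"
done
echo "expected expansions:$sizes"
# → expected expansions: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
```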
************************************
00:04:49.065 START TEST env_pci
00:04:49.065 ************************************
00:04:49.065 05:26:36 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:49.065
00:04:49.065
00:04:49.065 CUnit - A unit testing framework for C - Version 2.1-3
00:04:49.065 http://cunit.sourceforge.net/
00:04:49.065
00:04:49.065
00:04:49.065 Suite: pci
00:04:49.065 Test: pci_hook ...[2024-11-27 05:26:37.011538] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1565192 has claimed it
00:04:49.065 EAL: Cannot find device (10000:00:01.0)
00:04:49.065 EAL: Failed to attach device on primary process
00:04:49.065 passed
00:04:49.065
00:04:49.065 Run Summary: Type Total Ran Passed Failed Inactive
00:04:49.065 suites 1 1 n/a 0 0
00:04:49.065 tests 1 1 1 0 0
00:04:49.065 asserts 25 25 25 0 n/a
00:04:49.065
00:04:49.065 Elapsed time = 0.027 seconds
00:04:49.065
00:04:49.065 real 0m0.046s
00:04:49.065 user 0m0.014s
00:04:49.065 sys 0m0.032s
00:04:49.065 05:26:37 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:49.065 05:26:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:49.065 ************************************
00:04:49.065 END TEST env_pci
00:04:49.065 ************************************
00:04:49.323 05:26:37 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:49.323 05:26:37 env -- env/env.sh@15 -- # uname
00:04:49.323 05:26:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:49.323 05:26:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:49.323 05:26:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:49.323 05:26:37 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:49.323 05:26:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:49.323 05:26:37 env -- common/autotest_common.sh@10 -- # set +x
00:04:49.323 ************************************
00:04:49.323 START TEST env_dpdk_post_init
00:04:49.323 ************************************
00:04:49.323 05:26:37 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:49.323 EAL: Detected CPU lcores: 96
00:04:49.323 EAL: Detected NUMA nodes: 2
00:04:49.323 EAL: Detected shared linkage of DPDK
00:04:49.323 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:49.323 EAL: Selected IOVA mode 'VA'
00:04:49.323 EAL: VFIO support initialized
00:04:49.323 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:49.323 EAL: Using IOMMU type 1 (Type 1)
00:04:49.323 EAL: Ignore mapping IO port bar(1)
00:04:49.323 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:49.324 EAL: Ignore mapping IO port bar(1)
00:04:49.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:49.324 EAL: Ignore mapping IO port bar(1)
00:04:49.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:49.324 EAL: Ignore mapping IO port bar(1)
00:04:49.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:49.324 EAL: Ignore mapping IO port bar(1)
00:04:49.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:49.324 EAL: Ignore mapping IO port bar(1)
00:04:49.324 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:49.583 EAL: Ignore mapping IO port bar(1)
00:04:49.583 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:49.583 EAL: Ignore mapping IO port bar(1)
00:04:49.583 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:50.151 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:50.151 EAL: Ignore mapping IO port bar(1)
00:04:50.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:50.151 EAL: Ignore mapping IO port bar(1)
00:04:50.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:50.151 EAL: Ignore mapping IO port bar(1)
00:04:50.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:50.151 EAL: Ignore mapping IO port bar(1)
00:04:50.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:50.151 EAL: Ignore mapping IO port bar(1)
00:04:50.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:50.151 EAL: Ignore mapping IO port bar(1)
00:04:50.151 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:50.410 EAL: Ignore mapping IO port bar(1)
00:04:50.410 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:50.410 EAL: Ignore mapping IO port bar(1)
00:04:50.410 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:53.698 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:53.698 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:54.267 Starting DPDK initialization...
00:04:54.267 Starting SPDK post initialization...
00:04:54.267 SPDK NVMe probe
00:04:54.267 Attaching to 0000:5e:00.0
00:04:54.267 Attached to 0000:5e:00.0
00:04:54.267 Cleaning up...
00:04:54.267
00:04:54.267 real 0m4.849s user 0m3.419s sys 0m0.501s
00:04:54.268 05:26:41 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:54.268 05:26:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:54.268 ************************************
00:04:54.268 END TEST env_dpdk_post_init
00:04:54.268 ************************************
00:04:54.268 05:26:42 env -- env/env.sh@26 -- # uname
00:04:54.268 05:26:42 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:54.268 05:26:42 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:54.268 05:26:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:54.268 05:26:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:54.268 05:26:42 env -- common/autotest_common.sh@10 -- # set +x
00:04:54.268 ************************************
00:04:54.268 START TEST env_mem_callbacks
00:04:54.268 ************************************
00:04:54.268 05:26:42 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:54.268 EAL: Detected CPU lcores: 96
00:04:54.268 EAL: Detected NUMA nodes: 2
00:04:54.268 EAL: Detected shared linkage of DPDK
00:04:54.268 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:54.268 EAL: Selected IOVA mode 'VA'
00:04:54.268 EAL: VFIO support initialized
00:04:54.268 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:54.268
00:04:54.268
00:04:54.268 CUnit - A unit testing framework for C - Version 2.1-3
00:04:54.268 http://cunit.sourceforge.net/
00:04:54.268
00:04:54.268
00:04:54.268 Suite: memory
00:04:54.268 Test: test ...
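In the mem_callbacks output that follows, each malloc is reported to the registration callback rounded up to the 2 MB hugepage boundary: malloc 3145728 registers 4194304, malloc 4194304 registers 6291456 (allocator header overhead pushes an exact multiple over the line), and malloc 8388608 registers 10485760. A sketch of that rounding, with the small per-allocation overhead as our assumption rather than a value the log prints:

```shell
# Round an allocation up to the 2 MB hugepage boundary, showing how the
# "register" sizes in the mem_callbacks test relate to the malloc sizes.
# The 64-byte allocator overhead is an assumed illustrative value.
round_up_2mb() {
  local sz=$1 hp=2097152
  echo $(( (sz + hp - 1) / hp * hp ))
}
overhead=64
for sz in 3145728 4194304 8388608; do
  echo "malloc $sz -> register $(round_up_2mb $((sz + overhead)))"
done
```

With the assumed overhead the three registered sizes come out as 4194304, 6291456, and 10485760, matching the log.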
00:04:54.268 register 0x200000200000 2097152
00:04:54.268 malloc 3145728
00:04:54.268 register 0x200000400000 4194304
00:04:54.268 buf 0x200000500000 len 3145728 PASSED
00:04:54.268 malloc 64
00:04:54.268 buf 0x2000004fff40 len 64 PASSED
00:04:54.268 malloc 4194304
00:04:54.268 register 0x200000800000 6291456
00:04:54.268 buf 0x200000a00000 len 4194304 PASSED
00:04:54.268 free 0x200000500000 3145728
00:04:54.268 free 0x2000004fff40 64
00:04:54.268 unregister 0x200000400000 4194304 PASSED
00:04:54.268 free 0x200000a00000 4194304
00:04:54.268 unregister 0x200000800000 6291456 PASSED
00:04:54.268 malloc 8388608
00:04:54.268 register 0x200000400000 10485760
00:04:54.268 buf 0x200000600000 len 8388608 PASSED
00:04:54.268 free 0x200000600000 8388608
00:04:54.268 unregister 0x200000400000 10485760 PASSED
00:04:54.268 passed
00:04:54.268
00:04:54.268 Run Summary: Type Total Ran Passed Failed Inactive
00:04:54.268 suites 1 1 n/a 0 0
00:04:54.268 tests 1 1 1 0 0
00:04:54.268 asserts 15 15 15 0 n/a
00:04:54.268
00:04:54.268 Elapsed time = 0.008 seconds
00:04:54.268
00:04:54.268 real 0m0.059s
00:04:54.268 user 0m0.017s
00:04:54.268 sys 0m0.042s
00:04:54.268 05:26:42 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:54.268 05:26:42 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:54.268 ************************************
00:04:54.268 END TEST env_mem_callbacks
00:04:54.268 ************************************
00:04:54.268
00:04:54.268 real 0m6.730s
00:04:54.268 user 0m4.474s
00:04:54.268 sys 0m1.336s
00:04:54.268 05:26:42 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:54.268 05:26:42 env -- common/autotest_common.sh@10 -- # set +x
00:04:54.268 ************************************
00:04:54.268 END TEST env
00:04:54.268 ************************************
00:04:54.268 05:26:42 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:54.268 05:26:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:54.268 05:26:42 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:54.268 05:26:42 -- common/autotest_common.sh@10 -- # set +x
00:04:54.268 ************************************
00:04:54.268 START TEST rpc
00:04:54.268 ************************************
00:04:54.268 05:26:42 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:54.527 * Looking for test storage...
00:04:54.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:54.528 05:26:42 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:54.528 05:26:42 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:54.528 05:26:42 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:54.528 05:26:42 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:54.528 05:26:42 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:54.528 05:26:42 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:54.528 05:26:42 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:54.528 05:26:42 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:54.528 05:26:42 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:54.528 05:26:42 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:54.528 05:26:42 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:54.528 05:26:42 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:54.528 05:26:42 rpc -- scripts/common.sh@345 -- # : 1
00:04:54.528 05:26:42 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:54.528 05:26:42 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:54.528 05:26:42 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:54.528 05:26:42 rpc -- scripts/common.sh@353 -- # local d=1
00:04:54.528 05:26:42 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:54.528 05:26:42 rpc -- scripts/common.sh@355 -- # echo 1
00:04:54.528 05:26:42 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:54.528 05:26:42 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:54.528 05:26:42 rpc -- scripts/common.sh@353 -- # local d=2
00:04:54.528 05:26:42 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:54.528 05:26:42 rpc -- scripts/common.sh@355 -- # echo 2
00:04:54.528 05:26:42 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:54.528 05:26:42 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:54.528 05:26:42 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:54.528 05:26:42 rpc -- scripts/common.sh@368 -- # return 0
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:54.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.528 --rc genhtml_branch_coverage=1
00:04:54.528 --rc genhtml_function_coverage=1
00:04:54.528 --rc genhtml_legend=1
00:04:54.528 --rc geninfo_all_blocks=1
00:04:54.528 --rc geninfo_unexecuted_blocks=1
00:04:54.528
00:04:54.528 '
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:54.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.528 --rc genhtml_branch_coverage=1
00:04:54.528 --rc genhtml_function_coverage=1
00:04:54.528 --rc genhtml_legend=1
00:04:54.528 --rc geninfo_all_blocks=1
00:04:54.528 --rc geninfo_unexecuted_blocks=1
00:04:54.528
00:04:54.528 '
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:54.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.528 --rc genhtml_branch_coverage=1
00:04:54.528 --rc genhtml_function_coverage=1
00:04:54.528 --rc genhtml_legend=1
00:04:54.528 --rc geninfo_all_blocks=1
00:04:54.528 --rc geninfo_unexecuted_blocks=1
00:04:54.528
00:04:54.528 '
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:54.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:54.528 --rc genhtml_branch_coverage=1
00:04:54.528 --rc genhtml_function_coverage=1
00:04:54.528 --rc genhtml_legend=1
00:04:54.528 --rc geninfo_all_blocks=1
00:04:54.528 --rc geninfo_unexecuted_blocks=1
00:04:54.528
00:04:54.528 '
00:04:54.528 05:26:42 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1566241
00:04:54.528 05:26:42 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:54.528 05:26:42 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:54.528 05:26:42 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1566241
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@835 -- # '[' -z 1566241 ']'
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:54.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:54.528 05:26:42 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:54.528 [2024-11-27 05:26:42.440335] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:04:54.528 [2024-11-27 05:26:42.440380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566241 ]
00:04:54.528 [2024-11-27 05:26:42.516102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.787 [2024-11-27 05:26:42.557720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:54.787 [2024-11-27 05:26:42.557759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1566241' to capture a snapshot of events at runtime.
00:04:54.787 [2024-11-27 05:26:42.557769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:54.787 [2024-11-27 05:26:42.557775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:54.787 [2024-11-27 05:26:42.557779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1566241 for offline analysis/debug.
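The `waitforlisten 1566241` step above polls (bounded by `max_retries=100`) until `spdk_tgt` is serving RPCs on `/var/tmp/spdk.sock`. A simplified stand-in for that loop (the real helper also confirms the server answers an RPC via `scripts/rpc.py`, which this sketch skips; `wait_for_path` is a hypothetical name, not a function from the test suite):

```shell
# Hypothetical simplification of waitforlisten: poll until a path (the
# RPC UNIX domain socket) appears, bounded by a retry budget. The real
# helper additionally issues an RPC to confirm the server is answering.
wait_for_path() {
  local path=$1 max_retries=${2:-100}
  while (( max_retries-- > 0 )); do
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1
}
```

Usage would look like `wait_for_path /var/tmp/spdk.sock 100 || echo 'spdk_tgt never came up'`.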
00:04:54.787 [2024-11-27 05:26:42.558345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.355 05:26:43 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:55.355 05:26:43 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:55.355 05:26:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:55.355 05:26:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:55.355 05:26:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:55.355 05:26:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:55.355 05:26:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:55.355 05:26:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.355 05:26:43 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:55.355 ************************************
00:04:55.355 START TEST rpc_integrity
00:04:55.355 ************************************
00:04:55.355 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:55.355 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:55.355 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.355 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.355 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.355 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:55.355 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:55.355 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:55.355 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:55.355 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.355 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.616 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.616 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:55.616 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:55.616 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.616 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.616 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.616 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:55.616 {
00:04:55.616 "name": "Malloc0",
00:04:55.616 "aliases": [
00:04:55.616 "7ef15d1b-bafc-425f-886f-9f32322ba585"
00:04:55.616 ],
00:04:55.616 "product_name": "Malloc disk",
00:04:55.616 "block_size": 512,
00:04:55.616 "num_blocks": 16384,
00:04:55.616 "uuid": "7ef15d1b-bafc-425f-886f-9f32322ba585",
00:04:55.616 "assigned_rate_limits": {
00:04:55.616 "rw_ios_per_sec": 0,
00:04:55.616 "rw_mbytes_per_sec": 0,
00:04:55.616 "r_mbytes_per_sec": 0,
00:04:55.616 "w_mbytes_per_sec": 0
00:04:55.616 },
00:04:55.616 "claimed": false,
00:04:55.616 "zoned": false,
00:04:55.616 "supported_io_types": {
00:04:55.616 "read": true,
00:04:55.616 "write": true,
00:04:55.616 "unmap": true,
00:04:55.616 "flush": true,
00:04:55.616 "reset": true,
00:04:55.616 "nvme_admin": false,
00:04:55.616 "nvme_io": false,
00:04:55.616 "nvme_io_md": false,
00:04:55.616 "write_zeroes": true,
00:04:55.616 "zcopy": true,
00:04:55.616 "get_zone_info": false,
00:04:55.616 "zone_management": false,
00:04:55.616 "zone_append": false,
00:04:55.616 "compare": false,
00:04:55.616 "compare_and_write": false,
00:04:55.616 "abort": true,
00:04:55.616 "seek_hole": false,
00:04:55.616 "seek_data": false,
00:04:55.616 "copy": true,
00:04:55.616 "nvme_iov_md": false
00:04:55.616 },
00:04:55.616 "memory_domains": [
00:04:55.616 {
00:04:55.616 "dma_device_id": "system",
00:04:55.616 "dma_device_type": 1
00:04:55.616 },
00:04:55.616 {
00:04:55.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:55.616 "dma_device_type": 2
00:04:55.616 }
00:04:55.616 ],
00:04:55.616 "driver_specific": {}
00:04:55.616 }
00:04:55.616 ]'
00:04:55.616 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:55.616 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:55.616 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:55.616 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.616 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.616 [2024-11-27 05:26:43.432339] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:55.616 [2024-11-27 05:26:43.432365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:55.616 [2024-11-27 05:26:43.432378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20c6280
00:04:55.616 [2024-11-27 05:26:43.432384] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:55.616 [2024-11-27 05:26:43.433456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:55.616 [2024-11-27 05:26:43.433477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:55.616 Passthru0
00:04:55.616 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.616 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:55.616 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.616 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.616 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.616 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:55.616 {
00:04:55.616 "name": "Malloc0",
00:04:55.616 "aliases": [
00:04:55.616 "7ef15d1b-bafc-425f-886f-9f32322ba585"
00:04:55.616 ],
00:04:55.616 "product_name": "Malloc disk",
00:04:55.616 "block_size": 512,
00:04:55.616 "num_blocks": 16384,
00:04:55.616 "uuid": "7ef15d1b-bafc-425f-886f-9f32322ba585",
00:04:55.616 "assigned_rate_limits": {
00:04:55.616 "rw_ios_per_sec": 0,
00:04:55.616 "rw_mbytes_per_sec": 0,
00:04:55.616 "r_mbytes_per_sec": 0,
00:04:55.616 "w_mbytes_per_sec": 0
00:04:55.616 },
00:04:55.616 "claimed": true,
00:04:55.616 "claim_type": "exclusive_write",
00:04:55.616 "zoned": false,
00:04:55.616 "supported_io_types": {
00:04:55.616 "read": true,
00:04:55.616 "write": true,
00:04:55.616 "unmap": true,
00:04:55.616 "flush": true,
00:04:55.616 "reset": true,
00:04:55.616 "nvme_admin": false,
00:04:55.616 "nvme_io": false,
00:04:55.616 "nvme_io_md": false,
00:04:55.616 "write_zeroes": true,
00:04:55.616 "zcopy": true,
00:04:55.616 "get_zone_info": false,
00:04:55.616 "zone_management": false,
00:04:55.616 "zone_append": false,
00:04:55.616 "compare": false,
00:04:55.616 "compare_and_write": false,
00:04:55.616 "abort": true,
00:04:55.616 "seek_hole": false,
00:04:55.616 "seek_data": false,
00:04:55.616 "copy": true,
00:04:55.616 "nvme_iov_md": false
00:04:55.616 },
00:04:55.616 "memory_domains": [
00:04:55.616 {
00:04:55.616 "dma_device_id": "system",
00:04:55.616 "dma_device_type": 1
00:04:55.616 },
00:04:55.616 {
00:04:55.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:55.616 "dma_device_type": 2
00:04:55.616 }
00:04:55.616 ],
00:04:55.616 "driver_specific": {}
00:04:55.616 },
00:04:55.616 {
00:04:55.616 "name": "Passthru0", 00:04:55.616 "aliases": [ 00:04:55.616 "de2bfdb1-970b-5cbd-b2e1-4c141c2ff523" 00:04:55.616 ], 00:04:55.616 "product_name": "passthru", 00:04:55.616 "block_size": 512, 00:04:55.616 "num_blocks": 16384, 00:04:55.616 "uuid": "de2bfdb1-970b-5cbd-b2e1-4c141c2ff523", 00:04:55.616 "assigned_rate_limits": { 00:04:55.616 "rw_ios_per_sec": 0, 00:04:55.616 "rw_mbytes_per_sec": 0, 00:04:55.616 "r_mbytes_per_sec": 0, 00:04:55.616 "w_mbytes_per_sec": 0 00:04:55.616 }, 00:04:55.616 "claimed": false, 00:04:55.616 "zoned": false, 00:04:55.616 "supported_io_types": { 00:04:55.616 "read": true, 00:04:55.616 "write": true, 00:04:55.616 "unmap": true, 00:04:55.616 "flush": true, 00:04:55.616 "reset": true, 00:04:55.616 "nvme_admin": false, 00:04:55.616 "nvme_io": false, 00:04:55.616 "nvme_io_md": false, 00:04:55.616 "write_zeroes": true, 00:04:55.616 "zcopy": true, 00:04:55.616 "get_zone_info": false, 00:04:55.616 "zone_management": false, 00:04:55.616 "zone_append": false, 00:04:55.616 "compare": false, 00:04:55.616 "compare_and_write": false, 00:04:55.616 "abort": true, 00:04:55.616 "seek_hole": false, 00:04:55.616 "seek_data": false, 00:04:55.617 "copy": true, 00:04:55.617 "nvme_iov_md": false 00:04:55.617 }, 00:04:55.617 "memory_domains": [ 00:04:55.617 { 00:04:55.617 "dma_device_id": "system", 00:04:55.617 "dma_device_type": 1 00:04:55.617 }, 00:04:55.617 { 00:04:55.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.617 "dma_device_type": 2 00:04:55.617 } 00:04:55.617 ], 00:04:55.617 "driver_specific": { 00:04:55.617 "passthru": { 00:04:55.617 "name": "Passthru0", 00:04:55.617 "base_bdev_name": "Malloc0" 00:04:55.617 } 00:04:55.617 } 00:04:55.617 } 00:04:55.617 ]' 00:04:55.617 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:55.617 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:55.617 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:55.617 05:26:43 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.617 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.617 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.617 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:55.617 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.617 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.617 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.617 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:55.617 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.617 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.617 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.617 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:55.617 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:55.617 05:26:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:55.617 00:04:55.617 real 0m0.279s 00:04:55.617 user 0m0.177s 00:04:55.617 sys 0m0.037s 00:04:55.617 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.617 05:26:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:55.617 ************************************ 00:04:55.617 END TEST rpc_integrity 00:04:55.617 ************************************ 00:04:55.617 05:26:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:55.617 05:26:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.617 05:26:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.617 05:26:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.876 ************************************ 00:04:55.876 START TEST rpc_plugins 
00:04:55.876 ************************************ 00:04:55.876 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:55.876 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:55.876 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.876 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:55.876 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.876 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:55.876 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:55.876 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.876 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:55.876 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.876 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:55.876 { 00:04:55.876 "name": "Malloc1", 00:04:55.876 "aliases": [ 00:04:55.876 "59dbbcd9-1c66-463e-9972-537f396b11db" 00:04:55.876 ], 00:04:55.876 "product_name": "Malloc disk", 00:04:55.876 "block_size": 4096, 00:04:55.876 "num_blocks": 256, 00:04:55.876 "uuid": "59dbbcd9-1c66-463e-9972-537f396b11db", 00:04:55.876 "assigned_rate_limits": { 00:04:55.876 "rw_ios_per_sec": 0, 00:04:55.876 "rw_mbytes_per_sec": 0, 00:04:55.876 "r_mbytes_per_sec": 0, 00:04:55.876 "w_mbytes_per_sec": 0 00:04:55.876 }, 00:04:55.876 "claimed": false, 00:04:55.876 "zoned": false, 00:04:55.876 "supported_io_types": { 00:04:55.876 "read": true, 00:04:55.876 "write": true, 00:04:55.876 "unmap": true, 00:04:55.876 "flush": true, 00:04:55.876 "reset": true, 00:04:55.876 "nvme_admin": false, 00:04:55.876 "nvme_io": false, 00:04:55.876 "nvme_io_md": false, 00:04:55.876 "write_zeroes": true, 00:04:55.876 "zcopy": true, 00:04:55.876 "get_zone_info": false, 00:04:55.876 "zone_management": false, 00:04:55.876 
"zone_append": false, 00:04:55.876 "compare": false, 00:04:55.876 "compare_and_write": false, 00:04:55.876 "abort": true, 00:04:55.876 "seek_hole": false, 00:04:55.876 "seek_data": false, 00:04:55.876 "copy": true, 00:04:55.876 "nvme_iov_md": false 00:04:55.876 }, 00:04:55.876 "memory_domains": [ 00:04:55.876 { 00:04:55.876 "dma_device_id": "system", 00:04:55.876 "dma_device_type": 1 00:04:55.876 }, 00:04:55.876 { 00:04:55.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:55.876 "dma_device_type": 2 00:04:55.876 } 00:04:55.876 ], 00:04:55.876 "driver_specific": {} 00:04:55.876 } 00:04:55.876 ]' 00:04:55.876 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:55.876 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:55.876 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:55.876 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.876 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:55.876 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.876 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:55.877 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.877 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:55.877 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.877 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:55.877 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:55.877 05:26:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:55.877 00:04:55.877 real 0m0.144s 00:04:55.877 user 0m0.085s 00:04:55.877 sys 0m0.021s 00:04:55.877 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.877 05:26:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:55.877 ************************************ 
00:04:55.877 END TEST rpc_plugins 00:04:55.877 ************************************ 00:04:55.877 05:26:43 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:55.877 05:26:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.877 05:26:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.877 05:26:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.877 ************************************ 00:04:55.877 START TEST rpc_trace_cmd_test 00:04:55.877 ************************************ 00:04:55.877 05:26:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:55.877 05:26:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:55.877 05:26:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:55.877 05:26:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:55.877 05:26:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.143 05:26:43 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.143 05:26:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:56.143 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1566241", 00:04:56.143 "tpoint_group_mask": "0x8", 00:04:56.143 "iscsi_conn": { 00:04:56.143 "mask": "0x2", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "scsi": { 00:04:56.143 "mask": "0x4", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "bdev": { 00:04:56.143 "mask": "0x8", 00:04:56.143 "tpoint_mask": "0xffffffffffffffff" 00:04:56.143 }, 00:04:56.143 "nvmf_rdma": { 00:04:56.143 "mask": "0x10", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "nvmf_tcp": { 00:04:56.143 "mask": "0x20", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "ftl": { 00:04:56.143 "mask": "0x40", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "blobfs": { 00:04:56.143 "mask": "0x80", 00:04:56.143 
"tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "dsa": { 00:04:56.143 "mask": "0x200", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "thread": { 00:04:56.143 "mask": "0x400", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "nvme_pcie": { 00:04:56.143 "mask": "0x800", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "iaa": { 00:04:56.143 "mask": "0x1000", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "nvme_tcp": { 00:04:56.143 "mask": "0x2000", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "bdev_nvme": { 00:04:56.143 "mask": "0x4000", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "sock": { 00:04:56.143 "mask": "0x8000", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "blob": { 00:04:56.143 "mask": "0x10000", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "bdev_raid": { 00:04:56.143 "mask": "0x20000", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 }, 00:04:56.143 "scheduler": { 00:04:56.143 "mask": "0x40000", 00:04:56.143 "tpoint_mask": "0x0" 00:04:56.143 } 00:04:56.143 }' 00:04:56.143 05:26:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:56.143 05:26:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:56.143 05:26:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:56.143 05:26:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:56.143 05:26:43 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:56.143 05:26:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:56.143 05:26:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:56.143 05:26:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:56.143 05:26:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:56.143 05:26:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:56.143 00:04:56.143 real 0m0.222s 00:04:56.143 user 0m0.182s 00:04:56.143 sys 0m0.030s 00:04:56.143 05:26:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.143 05:26:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.143 ************************************ 00:04:56.143 END TEST rpc_trace_cmd_test 00:04:56.143 ************************************ 00:04:56.143 05:26:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:56.143 05:26:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:56.143 05:26:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:56.143 05:26:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.143 05:26:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.143 05:26:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.512 ************************************ 00:04:56.512 START TEST rpc_daemon_integrity 00:04:56.512 ************************************ 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.512 { 00:04:56.512 "name": "Malloc2", 00:04:56.512 "aliases": [ 00:04:56.512 "4b757a47-5f74-4cd9-8c98-f82227a9cab3" 00:04:56.512 ], 00:04:56.512 "product_name": "Malloc disk", 00:04:56.512 "block_size": 512, 00:04:56.512 "num_blocks": 16384, 00:04:56.512 "uuid": "4b757a47-5f74-4cd9-8c98-f82227a9cab3", 00:04:56.512 "assigned_rate_limits": { 00:04:56.512 "rw_ios_per_sec": 0, 00:04:56.512 "rw_mbytes_per_sec": 0, 00:04:56.512 "r_mbytes_per_sec": 0, 00:04:56.512 "w_mbytes_per_sec": 0 00:04:56.512 }, 00:04:56.512 "claimed": false, 00:04:56.512 "zoned": false, 00:04:56.512 "supported_io_types": { 00:04:56.512 "read": true, 00:04:56.512 "write": true, 00:04:56.512 "unmap": true, 00:04:56.512 "flush": true, 00:04:56.512 "reset": true, 00:04:56.512 "nvme_admin": false, 00:04:56.512 "nvme_io": false, 00:04:56.512 "nvme_io_md": false, 00:04:56.512 "write_zeroes": true, 00:04:56.512 "zcopy": true, 00:04:56.512 "get_zone_info": false, 00:04:56.512 "zone_management": false, 00:04:56.512 "zone_append": false, 00:04:56.512 "compare": false, 00:04:56.512 "compare_and_write": false, 00:04:56.512 "abort": true, 00:04:56.512 "seek_hole": false, 00:04:56.512 "seek_data": false, 00:04:56.512 "copy": true, 00:04:56.512 "nvme_iov_md": false 00:04:56.512 }, 00:04:56.512 "memory_domains": [ 00:04:56.512 { 
00:04:56.512 "dma_device_id": "system", 00:04:56.512 "dma_device_type": 1 00:04:56.512 }, 00:04:56.512 { 00:04:56.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.512 "dma_device_type": 2 00:04:56.512 } 00:04:56.512 ], 00:04:56.512 "driver_specific": {} 00:04:56.512 } 00:04:56.512 ]' 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.512 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.512 [2024-11-27 05:26:44.282641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:56.512 [2024-11-27 05:26:44.282668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.513 [2024-11-27 05:26:44.282685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20c8150 00:04:56.513 [2024-11-27 05:26:44.282690] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.513 [2024-11-27 05:26:44.283659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.513 [2024-11-27 05:26:44.283689] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.513 Passthru0 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.513 { 00:04:56.513 "name": "Malloc2", 00:04:56.513 "aliases": [ 00:04:56.513 "4b757a47-5f74-4cd9-8c98-f82227a9cab3" 00:04:56.513 ], 00:04:56.513 "product_name": "Malloc disk", 00:04:56.513 "block_size": 512, 00:04:56.513 "num_blocks": 16384, 00:04:56.513 "uuid": "4b757a47-5f74-4cd9-8c98-f82227a9cab3", 00:04:56.513 "assigned_rate_limits": { 00:04:56.513 "rw_ios_per_sec": 0, 00:04:56.513 "rw_mbytes_per_sec": 0, 00:04:56.513 "r_mbytes_per_sec": 0, 00:04:56.513 "w_mbytes_per_sec": 0 00:04:56.513 }, 00:04:56.513 "claimed": true, 00:04:56.513 "claim_type": "exclusive_write", 00:04:56.513 "zoned": false, 00:04:56.513 "supported_io_types": { 00:04:56.513 "read": true, 00:04:56.513 "write": true, 00:04:56.513 "unmap": true, 00:04:56.513 "flush": true, 00:04:56.513 "reset": true, 00:04:56.513 "nvme_admin": false, 00:04:56.513 "nvme_io": false, 00:04:56.513 "nvme_io_md": false, 00:04:56.513 "write_zeroes": true, 00:04:56.513 "zcopy": true, 00:04:56.513 "get_zone_info": false, 00:04:56.513 "zone_management": false, 00:04:56.513 "zone_append": false, 00:04:56.513 "compare": false, 00:04:56.513 "compare_and_write": false, 00:04:56.513 "abort": true, 00:04:56.513 "seek_hole": false, 00:04:56.513 "seek_data": false, 00:04:56.513 "copy": true, 00:04:56.513 "nvme_iov_md": false 00:04:56.513 }, 00:04:56.513 "memory_domains": [ 00:04:56.513 { 00:04:56.513 "dma_device_id": "system", 00:04:56.513 "dma_device_type": 1 00:04:56.513 }, 00:04:56.513 { 00:04:56.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.513 "dma_device_type": 2 00:04:56.513 } 00:04:56.513 ], 00:04:56.513 "driver_specific": {} 00:04:56.513 }, 00:04:56.513 { 00:04:56.513 "name": "Passthru0", 00:04:56.513 "aliases": [ 00:04:56.513 "cac27685-d6a7-5357-a33c-03ed75122375" 00:04:56.513 ], 00:04:56.513 "product_name": "passthru", 00:04:56.513 "block_size": 512, 00:04:56.513 "num_blocks": 16384, 00:04:56.513 "uuid": 
"cac27685-d6a7-5357-a33c-03ed75122375", 00:04:56.513 "assigned_rate_limits": { 00:04:56.513 "rw_ios_per_sec": 0, 00:04:56.513 "rw_mbytes_per_sec": 0, 00:04:56.513 "r_mbytes_per_sec": 0, 00:04:56.513 "w_mbytes_per_sec": 0 00:04:56.513 }, 00:04:56.513 "claimed": false, 00:04:56.513 "zoned": false, 00:04:56.513 "supported_io_types": { 00:04:56.513 "read": true, 00:04:56.513 "write": true, 00:04:56.513 "unmap": true, 00:04:56.513 "flush": true, 00:04:56.513 "reset": true, 00:04:56.513 "nvme_admin": false, 00:04:56.513 "nvme_io": false, 00:04:56.513 "nvme_io_md": false, 00:04:56.513 "write_zeroes": true, 00:04:56.513 "zcopy": true, 00:04:56.513 "get_zone_info": false, 00:04:56.513 "zone_management": false, 00:04:56.513 "zone_append": false, 00:04:56.513 "compare": false, 00:04:56.513 "compare_and_write": false, 00:04:56.513 "abort": true, 00:04:56.513 "seek_hole": false, 00:04:56.513 "seek_data": false, 00:04:56.513 "copy": true, 00:04:56.513 "nvme_iov_md": false 00:04:56.513 }, 00:04:56.513 "memory_domains": [ 00:04:56.513 { 00:04:56.513 "dma_device_id": "system", 00:04:56.513 "dma_device_type": 1 00:04:56.513 }, 00:04:56.513 { 00:04:56.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.513 "dma_device_type": 2 00:04:56.513 } 00:04:56.513 ], 00:04:56.513 "driver_specific": { 00:04:56.513 "passthru": { 00:04:56.513 "name": "Passthru0", 00:04:56.513 "base_bdev_name": "Malloc2" 00:04:56.513 } 00:04:56.513 } 00:04:56.513 } 00:04:56.513 ]' 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.513 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.514 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.514 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.514 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.514 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.514 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.514 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.514 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:56.514 05:26:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.514 00:04:56.514 real 0m0.265s 00:04:56.514 user 0m0.163s 00:04:56.514 sys 0m0.039s 00:04:56.514 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.514 05:26:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.514 ************************************ 00:04:56.514 END TEST rpc_daemon_integrity 00:04:56.514 ************************************ 00:04:56.514 05:26:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:56.514 05:26:44 rpc -- rpc/rpc.sh@84 -- # killprocess 1566241 00:04:56.514 05:26:44 rpc -- common/autotest_common.sh@954 -- # '[' -z 1566241 ']' 00:04:56.514 05:26:44 rpc -- common/autotest_common.sh@958 -- # kill -0 1566241 00:04:56.514 05:26:44 rpc -- common/autotest_common.sh@959 -- # uname 00:04:56.514 05:26:44 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.514 05:26:44 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566241 00:04:56.514 05:26:44 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.514 05:26:44 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.514 05:26:44 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566241' 00:04:56.514 killing process with pid 1566241 00:04:56.514 05:26:44 rpc -- common/autotest_common.sh@973 -- # kill 1566241 00:04:56.514 05:26:44 rpc -- common/autotest_common.sh@978 -- # wait 1566241 00:04:57.110 00:04:57.110 real 0m2.591s 00:04:57.110 user 0m3.299s 00:04:57.110 sys 0m0.734s 00:04:57.110 05:26:44 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.110 05:26:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.110 ************************************ 00:04:57.110 END TEST rpc 00:04:57.110 ************************************ 00:04:57.110 05:26:44 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:57.110 05:26:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.110 05:26:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.110 05:26:44 -- common/autotest_common.sh@10 -- # set +x 00:04:57.110 ************************************ 00:04:57.110 START TEST skip_rpc 00:04:57.110 ************************************ 00:04:57.110 05:26:44 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:57.110 * Looking for test storage... 
00:04:57.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:57.110 05:26:44 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.110 05:26:44 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.110 05:26:44 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.110 05:26:45 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.110 05:26:45 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:57.110 05:26:45 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.110 05:26:45 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.110 --rc genhtml_branch_coverage=1 00:04:57.110 --rc genhtml_function_coverage=1 00:04:57.110 --rc genhtml_legend=1 00:04:57.110 --rc geninfo_all_blocks=1 00:04:57.110 --rc geninfo_unexecuted_blocks=1 00:04:57.110 00:04:57.110 ' 00:04:57.110 05:26:45 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.110 --rc genhtml_branch_coverage=1 00:04:57.110 --rc genhtml_function_coverage=1 00:04:57.110 --rc genhtml_legend=1 00:04:57.110 --rc geninfo_all_blocks=1 00:04:57.110 --rc geninfo_unexecuted_blocks=1 00:04:57.110 00:04:57.110 ' 00:04:57.110 05:26:45 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:57.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.110 --rc genhtml_branch_coverage=1 00:04:57.110 --rc genhtml_function_coverage=1 00:04:57.110 --rc genhtml_legend=1 00:04:57.110 --rc geninfo_all_blocks=1 00:04:57.110 --rc geninfo_unexecuted_blocks=1 00:04:57.110 00:04:57.110 ' 00:04:57.110 05:26:45 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.110 --rc genhtml_branch_coverage=1 00:04:57.110 --rc genhtml_function_coverage=1 00:04:57.110 --rc genhtml_legend=1 00:04:57.110 --rc geninfo_all_blocks=1 00:04:57.110 --rc geninfo_unexecuted_blocks=1 00:04:57.110 00:04:57.110 ' 00:04:57.110 05:26:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.110 05:26:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.110 05:26:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:57.110 05:26:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.110 05:26:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.110 05:26:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.110 ************************************ 00:04:57.110 START TEST skip_rpc 00:04:57.110 ************************************ 00:04:57.110 05:26:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:57.110 05:26:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1566894 00:04:57.110 05:26:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.110 05:26:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:57.110 05:26:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:57.384 [2024-11-27 05:26:45.135677] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:04:57.384 [2024-11-27 05:26:45.135713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566894 ] 00:04:57.384 [2024-11-27 05:26:45.208248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.384 [2024-11-27 05:26:45.248257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.810 05:26:50 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1566894 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1566894 ']' 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1566894 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566894 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566894' 00:05:02.810 killing process with pid 1566894 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1566894 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1566894 00:05:02.810 00:05:02.810 real 0m5.368s 00:05:02.810 user 0m5.125s 00:05:02.810 sys 0m0.279s 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.810 05:26:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.810 ************************************ 00:05:02.810 END TEST skip_rpc 00:05:02.810 ************************************ 00:05:02.810 05:26:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:02.810 05:26:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.810 05:26:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.810 05:26:50 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.810 ************************************ 00:05:02.810 START TEST skip_rpc_with_json 00:05:02.810 ************************************ 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1567845 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1567845 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1567845 ']' 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.810 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.810 [2024-11-27 05:26:50.571642] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:05:02.810 [2024-11-27 05:26:50.571693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1567845 ] 00:05:02.810 [2024-11-27 05:26:50.647567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.810 [2024-11-27 05:26:50.689586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.069 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.069 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:03.069 05:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:03.069 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.069 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.069 [2024-11-27 05:26:50.912311] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:03.069 request: 00:05:03.069 { 00:05:03.070 "trtype": "tcp", 00:05:03.070 "method": "nvmf_get_transports", 00:05:03.070 "req_id": 1 00:05:03.070 } 00:05:03.070 Got JSON-RPC error response 00:05:03.070 response: 00:05:03.070 { 00:05:03.070 "code": -19, 00:05:03.070 "message": "No such device" 00:05:03.070 } 00:05:03.070 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:03.070 05:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:03.070 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.070 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.070 [2024-11-27 05:26:50.924412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.070 05:26:50 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.070 05:26:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:03.070 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.070 05:26:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.329 05:26:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.329 05:26:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.329 { 00:05:03.329 "subsystems": [ 00:05:03.329 { 00:05:03.329 "subsystem": "fsdev", 00:05:03.329 "config": [ 00:05:03.329 { 00:05:03.329 "method": "fsdev_set_opts", 00:05:03.329 "params": { 00:05:03.329 "fsdev_io_pool_size": 65535, 00:05:03.329 "fsdev_io_cache_size": 256 00:05:03.329 } 00:05:03.329 } 00:05:03.329 ] 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "subsystem": "vfio_user_target", 00:05:03.329 "config": null 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "subsystem": "keyring", 00:05:03.329 "config": [] 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "subsystem": "iobuf", 00:05:03.329 "config": [ 00:05:03.329 { 00:05:03.329 "method": "iobuf_set_options", 00:05:03.329 "params": { 00:05:03.329 "small_pool_count": 8192, 00:05:03.329 "large_pool_count": 1024, 00:05:03.329 "small_bufsize": 8192, 00:05:03.329 "large_bufsize": 135168, 00:05:03.329 "enable_numa": false 00:05:03.329 } 00:05:03.329 } 00:05:03.329 ] 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "subsystem": "sock", 00:05:03.329 "config": [ 00:05:03.329 { 00:05:03.329 "method": "sock_set_default_impl", 00:05:03.329 "params": { 00:05:03.329 "impl_name": "posix" 00:05:03.329 } 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "method": "sock_impl_set_options", 00:05:03.329 "params": { 00:05:03.329 "impl_name": "ssl", 00:05:03.329 "recv_buf_size": 4096, 00:05:03.329 "send_buf_size": 4096, 
00:05:03.329 "enable_recv_pipe": true, 00:05:03.329 "enable_quickack": false, 00:05:03.329 "enable_placement_id": 0, 00:05:03.329 "enable_zerocopy_send_server": true, 00:05:03.329 "enable_zerocopy_send_client": false, 00:05:03.329 "zerocopy_threshold": 0, 00:05:03.329 "tls_version": 0, 00:05:03.329 "enable_ktls": false 00:05:03.329 } 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "method": "sock_impl_set_options", 00:05:03.329 "params": { 00:05:03.329 "impl_name": "posix", 00:05:03.329 "recv_buf_size": 2097152, 00:05:03.329 "send_buf_size": 2097152, 00:05:03.329 "enable_recv_pipe": true, 00:05:03.329 "enable_quickack": false, 00:05:03.329 "enable_placement_id": 0, 00:05:03.329 "enable_zerocopy_send_server": true, 00:05:03.329 "enable_zerocopy_send_client": false, 00:05:03.329 "zerocopy_threshold": 0, 00:05:03.329 "tls_version": 0, 00:05:03.329 "enable_ktls": false 00:05:03.329 } 00:05:03.329 } 00:05:03.329 ] 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "subsystem": "vmd", 00:05:03.329 "config": [] 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "subsystem": "accel", 00:05:03.329 "config": [ 00:05:03.329 { 00:05:03.329 "method": "accel_set_options", 00:05:03.329 "params": { 00:05:03.329 "small_cache_size": 128, 00:05:03.329 "large_cache_size": 16, 00:05:03.329 "task_count": 2048, 00:05:03.329 "sequence_count": 2048, 00:05:03.329 "buf_count": 2048 00:05:03.329 } 00:05:03.329 } 00:05:03.329 ] 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "subsystem": "bdev", 00:05:03.329 "config": [ 00:05:03.329 { 00:05:03.329 "method": "bdev_set_options", 00:05:03.329 "params": { 00:05:03.329 "bdev_io_pool_size": 65535, 00:05:03.329 "bdev_io_cache_size": 256, 00:05:03.329 "bdev_auto_examine": true, 00:05:03.329 "iobuf_small_cache_size": 128, 00:05:03.329 "iobuf_large_cache_size": 16 00:05:03.329 } 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "method": "bdev_raid_set_options", 00:05:03.329 "params": { 00:05:03.329 "process_window_size_kb": 1024, 00:05:03.329 "process_max_bandwidth_mb_sec": 0 
00:05:03.329 } 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "method": "bdev_iscsi_set_options", 00:05:03.329 "params": { 00:05:03.329 "timeout_sec": 30 00:05:03.329 } 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "method": "bdev_nvme_set_options", 00:05:03.329 "params": { 00:05:03.329 "action_on_timeout": "none", 00:05:03.329 "timeout_us": 0, 00:05:03.329 "timeout_admin_us": 0, 00:05:03.329 "keep_alive_timeout_ms": 10000, 00:05:03.329 "arbitration_burst": 0, 00:05:03.329 "low_priority_weight": 0, 00:05:03.329 "medium_priority_weight": 0, 00:05:03.329 "high_priority_weight": 0, 00:05:03.329 "nvme_adminq_poll_period_us": 10000, 00:05:03.329 "nvme_ioq_poll_period_us": 0, 00:05:03.329 "io_queue_requests": 0, 00:05:03.329 "delay_cmd_submit": true, 00:05:03.329 "transport_retry_count": 4, 00:05:03.329 "bdev_retry_count": 3, 00:05:03.329 "transport_ack_timeout": 0, 00:05:03.329 "ctrlr_loss_timeout_sec": 0, 00:05:03.329 "reconnect_delay_sec": 0, 00:05:03.329 "fast_io_fail_timeout_sec": 0, 00:05:03.329 "disable_auto_failback": false, 00:05:03.329 "generate_uuids": false, 00:05:03.329 "transport_tos": 0, 00:05:03.329 "nvme_error_stat": false, 00:05:03.329 "rdma_srq_size": 0, 00:05:03.329 "io_path_stat": false, 00:05:03.329 "allow_accel_sequence": false, 00:05:03.329 "rdma_max_cq_size": 0, 00:05:03.329 "rdma_cm_event_timeout_ms": 0, 00:05:03.329 "dhchap_digests": [ 00:05:03.329 "sha256", 00:05:03.329 "sha384", 00:05:03.329 "sha512" 00:05:03.329 ], 00:05:03.329 "dhchap_dhgroups": [ 00:05:03.329 "null", 00:05:03.329 "ffdhe2048", 00:05:03.329 "ffdhe3072", 00:05:03.329 "ffdhe4096", 00:05:03.329 "ffdhe6144", 00:05:03.329 "ffdhe8192" 00:05:03.329 ] 00:05:03.329 } 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "method": "bdev_nvme_set_hotplug", 00:05:03.329 "params": { 00:05:03.329 "period_us": 100000, 00:05:03.329 "enable": false 00:05:03.329 } 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "method": "bdev_wait_for_examine" 00:05:03.329 } 00:05:03.329 ] 00:05:03.329 }, 00:05:03.329 { 
00:05:03.329 "subsystem": "scsi", 00:05:03.329 "config": null 00:05:03.329 }, 00:05:03.329 { 00:05:03.329 "subsystem": "scheduler", 00:05:03.329 "config": [ 00:05:03.329 { 00:05:03.329 "method": "framework_set_scheduler", 00:05:03.329 "params": { 00:05:03.329 "name": "static" 00:05:03.329 } 00:05:03.329 } 00:05:03.329 ] 00:05:03.329 }, 00:05:03.329 { 00:05:03.330 "subsystem": "vhost_scsi", 00:05:03.330 "config": [] 00:05:03.330 }, 00:05:03.330 { 00:05:03.330 "subsystem": "vhost_blk", 00:05:03.330 "config": [] 00:05:03.330 }, 00:05:03.330 { 00:05:03.330 "subsystem": "ublk", 00:05:03.330 "config": [] 00:05:03.330 }, 00:05:03.330 { 00:05:03.330 "subsystem": "nbd", 00:05:03.330 "config": [] 00:05:03.330 }, 00:05:03.330 { 00:05:03.330 "subsystem": "nvmf", 00:05:03.330 "config": [ 00:05:03.330 { 00:05:03.330 "method": "nvmf_set_config", 00:05:03.330 "params": { 00:05:03.330 "discovery_filter": "match_any", 00:05:03.330 "admin_cmd_passthru": { 00:05:03.330 "identify_ctrlr": false 00:05:03.330 }, 00:05:03.330 "dhchap_digests": [ 00:05:03.330 "sha256", 00:05:03.330 "sha384", 00:05:03.330 "sha512" 00:05:03.330 ], 00:05:03.330 "dhchap_dhgroups": [ 00:05:03.330 "null", 00:05:03.330 "ffdhe2048", 00:05:03.330 "ffdhe3072", 00:05:03.330 "ffdhe4096", 00:05:03.330 "ffdhe6144", 00:05:03.330 "ffdhe8192" 00:05:03.330 ] 00:05:03.330 } 00:05:03.330 }, 00:05:03.330 { 00:05:03.330 "method": "nvmf_set_max_subsystems", 00:05:03.330 "params": { 00:05:03.330 "max_subsystems": 1024 00:05:03.330 } 00:05:03.330 }, 00:05:03.330 { 00:05:03.330 "method": "nvmf_set_crdt", 00:05:03.330 "params": { 00:05:03.330 "crdt1": 0, 00:05:03.330 "crdt2": 0, 00:05:03.330 "crdt3": 0 00:05:03.330 } 00:05:03.330 }, 00:05:03.330 { 00:05:03.330 "method": "nvmf_create_transport", 00:05:03.330 "params": { 00:05:03.330 "trtype": "TCP", 00:05:03.330 "max_queue_depth": 128, 00:05:03.330 "max_io_qpairs_per_ctrlr": 127, 00:05:03.330 "in_capsule_data_size": 4096, 00:05:03.330 "max_io_size": 131072, 00:05:03.330 
"io_unit_size": 131072, 00:05:03.330 "max_aq_depth": 128, 00:05:03.330 "num_shared_buffers": 511, 00:05:03.330 "buf_cache_size": 4294967295, 00:05:03.330 "dif_insert_or_strip": false, 00:05:03.330 "zcopy": false, 00:05:03.330 "c2h_success": true, 00:05:03.330 "sock_priority": 0, 00:05:03.330 "abort_timeout_sec": 1, 00:05:03.330 "ack_timeout": 0, 00:05:03.330 "data_wr_pool_size": 0 00:05:03.330 } 00:05:03.330 } 00:05:03.330 ] 00:05:03.330 }, 00:05:03.330 { 00:05:03.330 "subsystem": "iscsi", 00:05:03.330 "config": [ 00:05:03.330 { 00:05:03.330 "method": "iscsi_set_options", 00:05:03.330 "params": { 00:05:03.330 "node_base": "iqn.2016-06.io.spdk", 00:05:03.330 "max_sessions": 128, 00:05:03.330 "max_connections_per_session": 2, 00:05:03.330 "max_queue_depth": 64, 00:05:03.330 "default_time2wait": 2, 00:05:03.330 "default_time2retain": 20, 00:05:03.330 "first_burst_length": 8192, 00:05:03.330 "immediate_data": true, 00:05:03.330 "allow_duplicated_isid": false, 00:05:03.330 "error_recovery_level": 0, 00:05:03.330 "nop_timeout": 60, 00:05:03.330 "nop_in_interval": 30, 00:05:03.330 "disable_chap": false, 00:05:03.330 "require_chap": false, 00:05:03.330 "mutual_chap": false, 00:05:03.330 "chap_group": 0, 00:05:03.330 "max_large_datain_per_connection": 64, 00:05:03.330 "max_r2t_per_connection": 4, 00:05:03.330 "pdu_pool_size": 36864, 00:05:03.330 "immediate_data_pool_size": 16384, 00:05:03.330 "data_out_pool_size": 2048 00:05:03.330 } 00:05:03.330 } 00:05:03.330 ] 00:05:03.330 } 00:05:03.330 ] 00:05:03.330 } 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1567845 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1567845 ']' 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1567845 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1567845 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1567845' 00:05:03.330 killing process with pid 1567845 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1567845 00:05:03.330 05:26:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1567845 00:05:03.589 05:26:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1567942 00:05:03.589 05:26:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.589 05:26:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1567942 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1567942 ']' 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1567942 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1567942 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1567942' 00:05:08.862 killing process with pid 1567942 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1567942 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1567942 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.862 00:05:08.862 real 0m6.292s 00:05:08.862 user 0m6.009s 00:05:08.862 sys 0m0.595s 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.862 05:26:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.862 ************************************ 00:05:08.862 END TEST skip_rpc_with_json 00:05:08.862 ************************************ 00:05:08.862 05:26:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:08.862 05:26:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.862 05:26:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.862 05:26:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.121 ************************************ 00:05:09.121 START TEST skip_rpc_with_delay 00:05:09.121 ************************************ 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.121 [2024-11-27 05:26:56.943177] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.121 00:05:09.121 real 0m0.072s 00:05:09.121 user 0m0.046s 00:05:09.121 sys 0m0.025s 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.121 05:26:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:09.121 ************************************ 00:05:09.121 END TEST skip_rpc_with_delay 00:05:09.121 ************************************ 00:05:09.121 05:26:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:09.121 05:26:56 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:09.121 05:26:56 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:09.121 05:26:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.121 05:26:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.121 05:26:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.121 ************************************ 00:05:09.121 START TEST exit_on_failed_rpc_init 00:05:09.121 ************************************ 00:05:09.121 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:09.121 05:26:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1568971 00:05:09.121 05:26:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1568971 00:05:09.121 05:26:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:09.121 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1568971 ']' 00:05:09.121 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.121 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.121 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.121 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.121 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.121 [2024-11-27 05:26:57.080899] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:05:09.122 [2024-11-27 05:26:57.080953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1568971 ] 00:05:09.380 [2024-11-27 05:26:57.155442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.380 [2024-11-27 05:26:57.195957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.640 
05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.640 [2024-11-27 05:26:57.472916] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:05:09.640 [2024-11-27 05:26:57.472957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569066 ] 00:05:09.640 [2024-11-27 05:26:57.547750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.640 [2024-11-27 05:26:57.588170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.640 [2024-11-27 05:26:57.588228] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:09.640 [2024-11-27 05:26:57.588250] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:09.640 [2024-11-27 05:26:57.588258] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1568971 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1568971 ']' 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1568971 00:05:09.640 05:26:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:09.640 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.900 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1568971 00:05:09.900 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.900 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.900 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1568971' 00:05:09.900 killing process with pid 1568971 00:05:09.900 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1568971 00:05:09.900 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1568971 00:05:10.159 00:05:10.159 real 0m0.956s 00:05:10.159 user 0m1.022s 00:05:10.159 sys 0m0.394s 00:05:10.159 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.159 05:26:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.159 ************************************ 00:05:10.159 END TEST exit_on_failed_rpc_init 00:05:10.159 ************************************ 00:05:10.159 05:26:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:10.159 00:05:10.159 real 0m13.149s 00:05:10.159 user 0m12.407s 00:05:10.159 sys 0m1.581s 00:05:10.159 05:26:58 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.159 05:26:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.159 ************************************ 00:05:10.159 END TEST skip_rpc 00:05:10.159 ************************************ 00:05:10.159 05:26:58 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:10.159 05:26:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.159 05:26:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.159 05:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:10.159 ************************************ 00:05:10.159 START TEST rpc_client 00:05:10.159 ************************************ 00:05:10.159 05:26:58 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:10.419 * Looking for test storage... 00:05:10.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.419 05:26:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.419 --rc genhtml_branch_coverage=1 00:05:10.419 --rc genhtml_function_coverage=1 00:05:10.419 --rc genhtml_legend=1 00:05:10.419 --rc geninfo_all_blocks=1 00:05:10.419 --rc geninfo_unexecuted_blocks=1 00:05:10.419 00:05:10.419 ' 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.419 --rc genhtml_branch_coverage=1 
00:05:10.419 --rc genhtml_function_coverage=1 00:05:10.419 --rc genhtml_legend=1 00:05:10.419 --rc geninfo_all_blocks=1 00:05:10.419 --rc geninfo_unexecuted_blocks=1 00:05:10.419 00:05:10.419 ' 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.419 --rc genhtml_branch_coverage=1 00:05:10.419 --rc genhtml_function_coverage=1 00:05:10.419 --rc genhtml_legend=1 00:05:10.419 --rc geninfo_all_blocks=1 00:05:10.419 --rc geninfo_unexecuted_blocks=1 00:05:10.419 00:05:10.419 ' 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.419 --rc genhtml_branch_coverage=1 00:05:10.419 --rc genhtml_function_coverage=1 00:05:10.419 --rc genhtml_legend=1 00:05:10.419 --rc geninfo_all_blocks=1 00:05:10.419 --rc geninfo_unexecuted_blocks=1 00:05:10.419 00:05:10.419 ' 00:05:10.419 05:26:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:10.419 OK 00:05:10.419 05:26:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:10.419 00:05:10.419 real 0m0.191s 00:05:10.419 user 0m0.109s 00:05:10.419 sys 0m0.096s 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.419 05:26:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:10.419 ************************************ 00:05:10.419 END TEST rpc_client 00:05:10.419 ************************************ 00:05:10.419 05:26:58 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:10.419 05:26:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.419 05:26:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.419 05:26:58 -- common/autotest_common.sh@10 
-- # set +x 00:05:10.419 ************************************ 00:05:10.419 START TEST json_config 00:05:10.419 ************************************ 00:05:10.419 05:26:58 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:10.419 05:26:58 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.419 05:26:58 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.419 05:26:58 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.678 05:26:58 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.679 05:26:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.679 05:26:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.679 05:26:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.679 05:26:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.679 05:26:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.679 05:26:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.679 05:26:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.679 05:26:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.679 05:26:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.679 05:26:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.679 05:26:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.679 05:26:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:10.679 05:26:58 json_config -- scripts/common.sh@345 -- # : 1 00:05:10.679 05:26:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.679 05:26:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.679 05:26:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:10.679 05:26:58 json_config -- scripts/common.sh@353 -- # local d=1 00:05:10.679 05:26:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.679 05:26:58 json_config -- scripts/common.sh@355 -- # echo 1 00:05:10.679 05:26:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.679 05:26:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:10.679 05:26:58 json_config -- scripts/common.sh@353 -- # local d=2 00:05:10.679 05:26:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.679 05:26:58 json_config -- scripts/common.sh@355 -- # echo 2 00:05:10.679 05:26:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.679 05:26:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.679 05:26:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.679 05:26:58 json_config -- scripts/common.sh@368 -- # return 0 00:05:10.679 05:26:58 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.679 05:26:58 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.679 --rc genhtml_branch_coverage=1 00:05:10.679 --rc genhtml_function_coverage=1 00:05:10.679 --rc genhtml_legend=1 00:05:10.679 --rc geninfo_all_blocks=1 00:05:10.679 --rc geninfo_unexecuted_blocks=1 00:05:10.679 00:05:10.679 ' 00:05:10.679 05:26:58 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.679 --rc genhtml_branch_coverage=1 00:05:10.679 --rc genhtml_function_coverage=1 00:05:10.679 --rc genhtml_legend=1 00:05:10.679 --rc geninfo_all_blocks=1 00:05:10.679 --rc geninfo_unexecuted_blocks=1 00:05:10.679 00:05:10.679 ' 00:05:10.679 05:26:58 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.679 --rc genhtml_branch_coverage=1 00:05:10.679 --rc genhtml_function_coverage=1 00:05:10.679 --rc genhtml_legend=1 00:05:10.679 --rc geninfo_all_blocks=1 00:05:10.679 --rc geninfo_unexecuted_blocks=1 00:05:10.679 00:05:10.679 ' 00:05:10.679 05:26:58 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.679 --rc genhtml_branch_coverage=1 00:05:10.679 --rc genhtml_function_coverage=1 00:05:10.679 --rc genhtml_legend=1 00:05:10.679 --rc geninfo_all_blocks=1 00:05:10.679 --rc geninfo_unexecuted_blocks=1 00:05:10.679 00:05:10.679 ' 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.679 05:26:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.679 05:26:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.679 05:26:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.679 05:26:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.679 05:26:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.679 05:26:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.679 05:26:58 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.679 05:26:58 json_config -- paths/export.sh@5 -- # export PATH 00:05:10.679 05:26:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@51 -- # : 0 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.679 05:26:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:10.679 INFO: JSON configuration test init 00:05:10.679 05:26:58 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:10.679 05:26:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.679 05:26:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:10.679 05:26:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.679 05:26:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.679 05:26:58 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:10.679 05:26:58 json_config -- json_config/common.sh@9 -- # local app=target 00:05:10.679 05:26:58 json_config -- json_config/common.sh@10 -- # shift 00:05:10.680 05:26:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.680 05:26:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.680 05:26:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.680 05:26:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.680 05:26:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.680 05:26:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1569418 00:05:10.680 05:26:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.680 Waiting for target to run... 
00:05:10.680 05:26:58 json_config -- json_config/common.sh@25 -- # waitforlisten 1569418 /var/tmp/spdk_tgt.sock 00:05:10.680 05:26:58 json_config -- common/autotest_common.sh@835 -- # '[' -z 1569418 ']' 00:05:10.680 05:26:58 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.680 05:26:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:10.680 05:26:58 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.680 05:26:58 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.680 05:26:58 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.680 05:26:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.680 [2024-11-27 05:26:58.596528] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:05:10.680 [2024-11-27 05:26:58.596576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1569418 ] 00:05:11.247 [2024-11-27 05:26:59.047624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.247 [2024-11-27 05:26:59.102102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.505 05:26:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.505 05:26:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:11.505 05:26:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:11.505 00:05:11.505 05:26:59 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:11.505 05:26:59 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:11.505 05:26:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.505 05:26:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.505 05:26:59 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:11.505 05:26:59 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:11.505 05:26:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.505 05:26:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.505 05:26:59 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:11.505 05:26:59 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:11.505 05:26:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:14.787 05:27:02 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:14.787 05:27:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:14.787 05:27:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.787 05:27:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.787 05:27:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:14.787 05:27:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:14.787 05:27:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:14.787 05:27:02 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:14.787 05:27:02 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:14.787 05:27:02 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:14.787 05:27:02 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:14.788 05:27:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:14.788 05:27:02 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:14.788 05:27:02 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:14.788 05:27:02 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:14.788 05:27:02 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:14.788 05:27:02 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:14.788 05:27:02 json_config -- json_config/json_config.sh@54 -- # sort 00:05:14.788 05:27:02 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:14.788 05:27:02 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:14.788 05:27:02 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:14.788 05:27:02 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:14.788 05:27:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.788 05:27:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:15.046 05:27:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.046 05:27:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:15.046 05:27:02 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:15.046 05:27:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:15.046 MallocForNvmf0 00:05:15.046 05:27:03 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:15.046 05:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:15.305 MallocForNvmf1 00:05:15.305 05:27:03 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:15.305 05:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:15.563 [2024-11-27 05:27:03.379735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.563 05:27:03 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:15.563 05:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:15.822 05:27:03 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.822 05:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.822 05:27:03 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:15.822 05:27:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:16.081 05:27:04 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.081 05:27:04 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.340 [2024-11-27 05:27:04.186211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:16.340 05:27:04 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:16.340 05:27:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.340 05:27:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.340 05:27:04 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:16.340 05:27:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.340 05:27:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.340 05:27:04 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:16.340 05:27:04 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:16.340 05:27:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:16.599 MallocBdevForConfigChangeCheck 00:05:16.599 05:27:04 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:16.599 05:27:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.599 05:27:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.599 05:27:04 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:16.600 05:27:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.858 05:27:04 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...'
00:05:16.858 INFO: shutting down applications...
00:05:16.858 05:27:04 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:05:16.858 05:27:04 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:05:16.858 05:27:04 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:05:16.858 05:27:04 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:19.436 Calling clear_iscsi_subsystem
00:05:19.436 Calling clear_nvmf_subsystem
00:05:19.436 Calling clear_nbd_subsystem
00:05:19.436 Calling clear_ublk_subsystem
00:05:19.436 Calling clear_vhost_blk_subsystem
00:05:19.436 Calling clear_vhost_scsi_subsystem
00:05:19.436 Calling clear_bdev_subsystem
00:05:19.436 05:27:07 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:19.436 05:27:07 json_config -- json_config/json_config.sh@350 -- # count=100
00:05:19.436 05:27:07 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:05:19.436 05:27:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:19.436 05:27:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:19.436 05:27:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:19.436 05:27:07 json_config -- json_config/json_config.sh@352 -- # break
00:05:19.436 05:27:07 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:05:19.436 05:27:07 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:05:19.436 05:27:07 json_config -- json_config/common.sh@31 -- # local app=target
00:05:19.436 05:27:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:19.436 05:27:07 json_config -- json_config/common.sh@35 -- # [[ -n 1569418 ]]
00:05:19.436 05:27:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1569418
00:05:19.436 05:27:07 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:19.436 05:27:07 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:19.436 05:27:07 json_config -- json_config/common.sh@41 -- # kill -0 1569418
00:05:19.436 05:27:07 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:20.004 05:27:07 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:20.004 05:27:07 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:20.004 05:27:07 json_config -- json_config/common.sh@41 -- # kill -0 1569418
00:05:20.004 05:27:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:20.004 05:27:07 json_config -- json_config/common.sh@43 -- # break
00:05:20.004 05:27:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:20.004 05:27:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:20.004 SPDK target shutdown done
00:05:20.004 05:27:07 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:05:20.004 INFO: relaunching applications...
00:05:20.004 05:27:07 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:20.004 05:27:07 json_config -- json_config/common.sh@9 -- # local app=target
00:05:20.004 05:27:07 json_config -- json_config/common.sh@10 -- # shift
00:05:20.004 05:27:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:20.004 05:27:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:20.004 05:27:07 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:20.004 05:27:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:20.004 05:27:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:20.004 05:27:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1571125
00:05:20.004 05:27:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:20.004 Waiting for target to run...
00:05:20.004 05:27:07 json_config -- json_config/common.sh@25 -- # waitforlisten 1571125 /var/tmp/spdk_tgt.sock
00:05:20.004 05:27:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:20.004 05:27:07 json_config -- common/autotest_common.sh@835 -- # '[' -z 1571125 ']'
00:05:20.004 05:27:07 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:20.004 05:27:07 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:20.004 05:27:07 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:20.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:20.004 05:27:07 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:20.004 05:27:07 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:20.004 [2024-11-27 05:27:07.962065] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:05:20.004 [2024-11-27 05:27:07.962125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571125 ]
00:05:20.263 [2024-11-27 05:27:08.261010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.522 [2024-11-27 05:27:08.293150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.811 [2024-11-27 05:27:11.321594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:23.811 [2024-11-27 05:27:11.353966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:23.811 05:27:11 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:23.811 05:27:11 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:23.811 05:27:11 json_config -- json_config/common.sh@26 -- # echo ''
00:05:23.811
00:05:23.811 05:27:11 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:05:23.811 05:27:11 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:05:23.811 INFO: Checking if target configuration is the same...
00:05:23.811 05:27:11 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:23.811 05:27:11 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:05:23.811 05:27:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:23.811 + '[' 2 -ne 2 ']'
00:05:23.811 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:23.811 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:23.811 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:23.811 +++ basename /dev/fd/62
00:05:23.811 ++ mktemp /tmp/62.XXX
00:05:23.811 + tmp_file_1=/tmp/62.361
00:05:23.811 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:23.811 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:23.811 + tmp_file_2=/tmp/spdk_tgt_config.json.KIW
00:05:23.811 + ret=0
00:05:23.811 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:23.811 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:23.811 + diff -u /tmp/62.361 /tmp/spdk_tgt_config.json.KIW
00:05:23.811 + echo 'INFO: JSON config files are the same'
00:05:23.811 INFO: JSON config files are the same
00:05:23.811 + rm /tmp/62.361 /tmp/spdk_tgt_config.json.KIW
00:05:23.811 + exit 0
00:05:23.811 05:27:11 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:05:23.811 05:27:11 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:05:23.811 INFO: changing configuration and checking if this can be detected...
00:05:23.811 05:27:11 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:23.811 05:27:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:24.070 05:27:11 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:05:24.070 05:27:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:24.070 05:27:11 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.070 + '[' 2 -ne 2 ']'
00:05:24.070 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:24.070 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:24.070 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:24.070 +++ basename /dev/fd/62
00:05:24.070 ++ mktemp /tmp/62.XXX
00:05:24.070 + tmp_file_1=/tmp/62.5Jj
00:05:24.070 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:24.070 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:24.070 + tmp_file_2=/tmp/spdk_tgt_config.json.Zwh
00:05:24.070 + ret=0
00:05:24.070 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:24.639 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:24.639 + diff -u /tmp/62.5Jj /tmp/spdk_tgt_config.json.Zwh
00:05:24.639 + ret=1
00:05:24.639 + echo '=== Start of file: /tmp/62.5Jj ==='
00:05:24.639 + cat /tmp/62.5Jj
00:05:24.639 + echo '=== End of file: /tmp/62.5Jj ==='
00:05:24.639 + echo ''
00:05:24.639 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Zwh ==='
00:05:24.639 + cat /tmp/spdk_tgt_config.json.Zwh
00:05:24.639 + echo '=== End of file: /tmp/spdk_tgt_config.json.Zwh ==='
00:05:24.639 + echo ''
00:05:24.639 + rm /tmp/62.5Jj /tmp/spdk_tgt_config.json.Zwh
00:05:24.639 + exit 1
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:05:24.639 INFO: configuration change detected.
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:05:24.639 05:27:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:24.639 05:27:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@324 -- # [[ -n 1571125 ]]
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:05:24.639 05:27:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:24.639 05:27:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@200 -- # uname -s
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:05:24.639 05:27:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:24.639 05:27:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:24.639 05:27:12 json_config -- json_config/json_config.sh@330 -- # killprocess 1571125
00:05:24.639 05:27:12 json_config -- common/autotest_common.sh@954 -- # '[' -z 1571125 ']'
00:05:24.639 05:27:12 json_config -- common/autotest_common.sh@958 -- # kill -0 1571125
00:05:24.639 05:27:12 json_config -- common/autotest_common.sh@959 -- # uname
00:05:24.639 05:27:12 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:24.640 05:27:12 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1571125
00:05:24.640 05:27:12 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:24.640 05:27:12 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:24.640 05:27:12 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1571125'
00:05:24.640 killing process with pid 1571125
00:05:24.640 05:27:12 json_config -- common/autotest_common.sh@973 -- # kill 1571125
00:05:24.640 05:27:12 json_config -- common/autotest_common.sh@978 -- # wait 1571125
00:05:27.178 05:27:14 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:27.178 05:27:14 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:05:27.178 05:27:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:27.178 05:27:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:27.178 05:27:14 json_config -- json_config/json_config.sh@335 -- # return 0
00:05:27.178 05:27:14 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:05:27.178 INFO: Success
00:05:27.178
00:05:27.178 real 0m16.279s
00:05:27.178 user 0m16.734s
00:05:27.178 sys 0m2.565s
00:05:27.178 05:27:14 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:27.178 05:27:14 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:27.178 ************************************
00:05:27.178 END TEST json_config
00:05:27.178 ************************************
00:05:27.178 05:27:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:27.178 05:27:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:27.178 05:27:14 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:27.178 05:27:14 -- common/autotest_common.sh@10 -- # set +x
00:05:27.178 ************************************
00:05:27.178 START TEST json_config_extra_key
00:05:27.178 ************************************
00:05:27.178 05:27:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:27.178 05:27:14 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:27.178 05:27:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:05:27.178 05:27:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:27.178 05:27:14 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:27.178 05:27:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:05:27.178 05:27:14 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:27.178 05:27:14 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:27.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:27.178 --rc genhtml_branch_coverage=1
00:05:27.178 --rc genhtml_function_coverage=1
00:05:27.178 --rc genhtml_legend=1
00:05:27.178 --rc geninfo_all_blocks=1
00:05:27.179 --rc geninfo_unexecuted_blocks=1
00:05:27.179
00:05:27.179 '
00:05:27.179 05:27:14 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:27.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:27.179 --rc genhtml_branch_coverage=1
00:05:27.179 --rc genhtml_function_coverage=1
00:05:27.179 --rc genhtml_legend=1
00:05:27.179 --rc geninfo_all_blocks=1
00:05:27.179 --rc geninfo_unexecuted_blocks=1
00:05:27.179
00:05:27.179 '
00:05:27.179 05:27:14 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:27.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:27.179 --rc genhtml_branch_coverage=1
00:05:27.179 --rc genhtml_function_coverage=1
00:05:27.179 --rc genhtml_legend=1
00:05:27.179 --rc geninfo_all_blocks=1
00:05:27.179 --rc geninfo_unexecuted_blocks=1
00:05:27.179
00:05:27.179 '
00:05:27.179 05:27:14 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:27.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:27.179 --rc genhtml_branch_coverage=1
00:05:27.179 --rc genhtml_function_coverage=1
00:05:27.179 --rc genhtml_legend=1
00:05:27.179 --rc geninfo_all_blocks=1
00:05:27.179 --rc geninfo_unexecuted_blocks=1
00:05:27.179
00:05:27.179 '
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:27.179 05:27:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:05:27.179 05:27:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:27.179 05:27:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:27.179 05:27:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:27.179 05:27:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:27.179 05:27:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:27.179 05:27:14 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:27.179 05:27:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:27.179 05:27:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:27.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:27.179 05:27:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:05:27.179 INFO: launching applications...
00:05:27.179 05:27:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1572842
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:27.179 Waiting for target to run...
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1572842 /var/tmp/spdk_tgt.sock
00:05:27.179 05:27:14 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1572842 ']'
00:05:27.179 05:27:14 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:27.179 05:27:14 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:27.179 05:27:14 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:27.179 05:27:14 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:27.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:27.179 05:27:14 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:27.179 05:27:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:27.179 [2024-11-27 05:27:14.942304] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:05:27.179 [2024-11-27 05:27:14.942356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572842 ]
00:05:27.439 [2024-11-27 05:27:15.223311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.439 [2024-11-27 05:27:15.256855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.008 05:27:15 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:28.008 05:27:15 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:05:28.008 05:27:15 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:28.008
00:05:28.008 05:27:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:28.008 INFO: shutting down applications...
00:05:28.008 05:27:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:28.008 05:27:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:28.008 05:27:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:28.008 05:27:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1572842 ]]
00:05:28.008 05:27:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1572842
00:05:28.008 05:27:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:28.008 05:27:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:28.008 05:27:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1572842
00:05:28.008 05:27:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:28.267 05:27:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:28.267 05:27:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:28.267 05:27:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1572842
00:05:28.267 05:27:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:28.267 05:27:16 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:28.267 05:27:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:28.267 05:27:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:28.267 SPDK target shutdown done
00:05:28.267 05:27:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:28.267 Success
00:05:28.267
00:05:28.267 real 0m1.570s
00:05:28.267 user 0m1.352s
00:05:28.267 sys 0m0.395s
00:05:28.526 05:27:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:28.526 05:27:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:28.526 ************************************
00:05:28.526 END TEST json_config_extra_key
00:05:28.526 ************************************
00:05:28.526 05:27:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:28.526 05:27:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:28.526 05:27:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:28.526 05:27:16 -- common/autotest_common.sh@10 -- # set +x
00:05:28.526 ************************************
00:05:28.526 START TEST alias_rpc
00:05:28.526 ************************************
00:05:28.526 05:27:16 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:28.526 * Looking for test storage...
00:05:28.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:05:28.526 05:27:16 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:28.526 05:27:16 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:28.526 05:27:16 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:28.526 05:27:16 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@345 -- # : 1
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:28.526 05:27:16 alias_rpc -- scripts/common.sh@368 -- # return 0
00:05:28.526 05:27:16 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:28.526 05:27:16 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:28.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.526 --rc genhtml_branch_coverage=1
00:05:28.526 --rc genhtml_function_coverage=1
00:05:28.526 --rc genhtml_legend=1
00:05:28.526 --rc geninfo_all_blocks=1
00:05:28.526 --rc geninfo_unexecuted_blocks=1
00:05:28.526
00:05:28.526 '
00:05:28.526 05:27:16 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:28.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:28.527 --rc genhtml_branch_coverage=1
00:05:28.527 --rc genhtml_function_coverage=1
00:05:28.527 --rc genhtml_legend=1
00:05:28.527 --rc geninfo_all_blocks=1
00:05:28.527 --rc geninfo_unexecuted_blocks=1
00:05:28.527
00:05:28.527 '
00:05:28.527 05:27:16 alias_rpc --
# export 'LCOV=lcov 00:05:28.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.527 --rc genhtml_branch_coverage=1 00:05:28.527 --rc genhtml_function_coverage=1 00:05:28.527 --rc genhtml_legend=1 00:05:28.527 --rc geninfo_all_blocks=1 00:05:28.527 --rc geninfo_unexecuted_blocks=1 00:05:28.527 00:05:28.527 ' 00:05:28.527 05:27:16 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.527 --rc genhtml_branch_coverage=1 00:05:28.527 --rc genhtml_function_coverage=1 00:05:28.527 --rc genhtml_legend=1 00:05:28.527 --rc geninfo_all_blocks=1 00:05:28.527 --rc geninfo_unexecuted_blocks=1 00:05:28.527 00:05:28.527 ' 00:05:28.527 05:27:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:28.527 05:27:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1573238 00:05:28.527 05:27:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.527 05:27:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1573238 00:05:28.527 05:27:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1573238 ']' 00:05:28.527 05:27:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.527 05:27:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.527 05:27:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.527 05:27:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.527 05:27:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.785 [2024-11-27 05:27:16.573851] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:05:28.785 [2024-11-27 05:27:16.573895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573238 ] 00:05:28.785 [2024-11-27 05:27:16.634874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.785 [2024-11-27 05:27:16.678130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.043 05:27:16 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.043 05:27:16 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.043 05:27:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:29.301 05:27:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1573238 00:05:29.301 05:27:17 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1573238 ']' 00:05:29.301 05:27:17 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1573238 00:05:29.301 05:27:17 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:29.301 05:27:17 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.301 05:27:17 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573238 00:05:29.301 05:27:17 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.301 05:27:17 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.301 05:27:17 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573238' 00:05:29.301 killing process with pid 1573238 00:05:29.301 05:27:17 alias_rpc -- common/autotest_common.sh@973 -- # kill 1573238 00:05:29.301 05:27:17 alias_rpc -- common/autotest_common.sh@978 -- # wait 1573238 00:05:29.560 00:05:29.560 real 0m1.117s 00:05:29.560 user 0m1.150s 00:05:29.560 sys 0m0.408s 00:05:29.560 05:27:17 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.560 05:27:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.560 ************************************ 00:05:29.560 END TEST alias_rpc 00:05:29.560 ************************************ 00:05:29.560 05:27:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:29.560 05:27:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.560 05:27:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.560 05:27:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.560 05:27:17 -- common/autotest_common.sh@10 -- # set +x 00:05:29.560 ************************************ 00:05:29.560 START TEST spdkcli_tcp 00:05:29.560 ************************************ 00:05:29.560 05:27:17 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.819 * Looking for test storage... 
00:05:29.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:29.819 05:27:17 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.819 05:27:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.819 05:27:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.819 05:27:17 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.820 05:27:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.820 --rc genhtml_branch_coverage=1 00:05:29.820 --rc genhtml_function_coverage=1 00:05:29.820 --rc genhtml_legend=1 00:05:29.820 --rc geninfo_all_blocks=1 00:05:29.820 --rc geninfo_unexecuted_blocks=1 00:05:29.820 00:05:29.820 ' 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.820 --rc genhtml_branch_coverage=1 00:05:29.820 --rc genhtml_function_coverage=1 00:05:29.820 --rc genhtml_legend=1 00:05:29.820 --rc geninfo_all_blocks=1 00:05:29.820 --rc geninfo_unexecuted_blocks=1 00:05:29.820 00:05:29.820 ' 00:05:29.820 05:27:17 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.820 --rc genhtml_branch_coverage=1 00:05:29.820 --rc genhtml_function_coverage=1 00:05:29.820 --rc genhtml_legend=1 00:05:29.820 --rc geninfo_all_blocks=1 00:05:29.820 --rc geninfo_unexecuted_blocks=1 00:05:29.820 00:05:29.820 ' 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.820 --rc genhtml_branch_coverage=1 00:05:29.820 --rc genhtml_function_coverage=1 00:05:29.820 --rc genhtml_legend=1 00:05:29.820 --rc geninfo_all_blocks=1 00:05:29.820 --rc geninfo_unexecuted_blocks=1 00:05:29.820 00:05:29.820 ' 00:05:29.820 05:27:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:29.820 05:27:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:29.820 05:27:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:29.820 05:27:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:29.820 05:27:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:29.820 05:27:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:29.820 05:27:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.820 05:27:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1573450 00:05:29.820 05:27:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:29.820 05:27:17 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1573450 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1573450 ']' 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.820 05:27:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.820 [2024-11-27 05:27:17.765045] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:05:29.820 [2024-11-27 05:27:17.765096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573450 ] 00:05:30.079 [2024-11-27 05:27:17.840561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.079 [2024-11-27 05:27:17.881565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.079 [2024-11-27 05:27:17.881567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.339 05:27:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.339 05:27:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:30.339 05:27:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1573531 00:05:30.339 05:27:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:30.339 05:27:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:30.339 [ 00:05:30.339 "bdev_malloc_delete", 00:05:30.339 "bdev_malloc_create", 00:05:30.339 "bdev_null_resize", 00:05:30.339 "bdev_null_delete", 00:05:30.339 "bdev_null_create", 00:05:30.339 "bdev_nvme_cuse_unregister", 00:05:30.339 "bdev_nvme_cuse_register", 00:05:30.339 "bdev_opal_new_user", 00:05:30.339 "bdev_opal_set_lock_state", 00:05:30.339 "bdev_opal_delete", 00:05:30.339 "bdev_opal_get_info", 00:05:30.339 "bdev_opal_create", 00:05:30.339 "bdev_nvme_opal_revert", 00:05:30.340 "bdev_nvme_opal_init", 00:05:30.340 "bdev_nvme_send_cmd", 00:05:30.340 "bdev_nvme_set_keys", 00:05:30.340 "bdev_nvme_get_path_iostat", 00:05:30.340 "bdev_nvme_get_mdns_discovery_info", 00:05:30.340 "bdev_nvme_stop_mdns_discovery", 00:05:30.340 "bdev_nvme_start_mdns_discovery", 00:05:30.340 "bdev_nvme_set_multipath_policy", 00:05:30.340 "bdev_nvme_set_preferred_path", 00:05:30.340 "bdev_nvme_get_io_paths", 00:05:30.340 "bdev_nvme_remove_error_injection", 00:05:30.340 "bdev_nvme_add_error_injection", 00:05:30.340 "bdev_nvme_get_discovery_info", 00:05:30.340 "bdev_nvme_stop_discovery", 00:05:30.340 "bdev_nvme_start_discovery", 00:05:30.340 "bdev_nvme_get_controller_health_info", 00:05:30.340 "bdev_nvme_disable_controller", 00:05:30.340 "bdev_nvme_enable_controller", 00:05:30.340 "bdev_nvme_reset_controller", 00:05:30.340 "bdev_nvme_get_transport_statistics", 00:05:30.340 "bdev_nvme_apply_firmware", 00:05:30.340 "bdev_nvme_detach_controller", 00:05:30.340 "bdev_nvme_get_controllers", 00:05:30.340 "bdev_nvme_attach_controller", 00:05:30.340 "bdev_nvme_set_hotplug", 00:05:30.340 "bdev_nvme_set_options", 00:05:30.340 "bdev_passthru_delete", 00:05:30.340 "bdev_passthru_create", 00:05:30.340 "bdev_lvol_set_parent_bdev", 00:05:30.340 "bdev_lvol_set_parent", 00:05:30.340 "bdev_lvol_check_shallow_copy", 00:05:30.340 "bdev_lvol_start_shallow_copy", 00:05:30.340 "bdev_lvol_grow_lvstore", 00:05:30.340 "bdev_lvol_get_lvols", 00:05:30.340 
"bdev_lvol_get_lvstores", 00:05:30.340 "bdev_lvol_delete", 00:05:30.340 "bdev_lvol_set_read_only", 00:05:30.340 "bdev_lvol_resize", 00:05:30.340 "bdev_lvol_decouple_parent", 00:05:30.340 "bdev_lvol_inflate", 00:05:30.340 "bdev_lvol_rename", 00:05:30.340 "bdev_lvol_clone_bdev", 00:05:30.340 "bdev_lvol_clone", 00:05:30.340 "bdev_lvol_snapshot", 00:05:30.340 "bdev_lvol_create", 00:05:30.340 "bdev_lvol_delete_lvstore", 00:05:30.340 "bdev_lvol_rename_lvstore", 00:05:30.340 "bdev_lvol_create_lvstore", 00:05:30.340 "bdev_raid_set_options", 00:05:30.340 "bdev_raid_remove_base_bdev", 00:05:30.340 "bdev_raid_add_base_bdev", 00:05:30.340 "bdev_raid_delete", 00:05:30.340 "bdev_raid_create", 00:05:30.340 "bdev_raid_get_bdevs", 00:05:30.340 "bdev_error_inject_error", 00:05:30.340 "bdev_error_delete", 00:05:30.340 "bdev_error_create", 00:05:30.340 "bdev_split_delete", 00:05:30.340 "bdev_split_create", 00:05:30.340 "bdev_delay_delete", 00:05:30.340 "bdev_delay_create", 00:05:30.340 "bdev_delay_update_latency", 00:05:30.340 "bdev_zone_block_delete", 00:05:30.340 "bdev_zone_block_create", 00:05:30.340 "blobfs_create", 00:05:30.340 "blobfs_detect", 00:05:30.340 "blobfs_set_cache_size", 00:05:30.340 "bdev_aio_delete", 00:05:30.340 "bdev_aio_rescan", 00:05:30.340 "bdev_aio_create", 00:05:30.340 "bdev_ftl_set_property", 00:05:30.340 "bdev_ftl_get_properties", 00:05:30.340 "bdev_ftl_get_stats", 00:05:30.340 "bdev_ftl_unmap", 00:05:30.340 "bdev_ftl_unload", 00:05:30.340 "bdev_ftl_delete", 00:05:30.340 "bdev_ftl_load", 00:05:30.340 "bdev_ftl_create", 00:05:30.340 "bdev_virtio_attach_controller", 00:05:30.340 "bdev_virtio_scsi_get_devices", 00:05:30.340 "bdev_virtio_detach_controller", 00:05:30.340 "bdev_virtio_blk_set_hotplug", 00:05:30.340 "bdev_iscsi_delete", 00:05:30.340 "bdev_iscsi_create", 00:05:30.340 "bdev_iscsi_set_options", 00:05:30.340 "accel_error_inject_error", 00:05:30.340 "ioat_scan_accel_module", 00:05:30.340 "dsa_scan_accel_module", 00:05:30.340 "iaa_scan_accel_module", 
00:05:30.340 "vfu_virtio_create_fs_endpoint", 00:05:30.340 "vfu_virtio_create_scsi_endpoint", 00:05:30.340 "vfu_virtio_scsi_remove_target", 00:05:30.340 "vfu_virtio_scsi_add_target", 00:05:30.340 "vfu_virtio_create_blk_endpoint", 00:05:30.340 "vfu_virtio_delete_endpoint", 00:05:30.340 "keyring_file_remove_key", 00:05:30.340 "keyring_file_add_key", 00:05:30.340 "keyring_linux_set_options", 00:05:30.340 "fsdev_aio_delete", 00:05:30.340 "fsdev_aio_create", 00:05:30.340 "iscsi_get_histogram", 00:05:30.340 "iscsi_enable_histogram", 00:05:30.340 "iscsi_set_options", 00:05:30.340 "iscsi_get_auth_groups", 00:05:30.340 "iscsi_auth_group_remove_secret", 00:05:30.340 "iscsi_auth_group_add_secret", 00:05:30.340 "iscsi_delete_auth_group", 00:05:30.340 "iscsi_create_auth_group", 00:05:30.340 "iscsi_set_discovery_auth", 00:05:30.340 "iscsi_get_options", 00:05:30.340 "iscsi_target_node_request_logout", 00:05:30.340 "iscsi_target_node_set_redirect", 00:05:30.340 "iscsi_target_node_set_auth", 00:05:30.340 "iscsi_target_node_add_lun", 00:05:30.340 "iscsi_get_stats", 00:05:30.340 "iscsi_get_connections", 00:05:30.340 "iscsi_portal_group_set_auth", 00:05:30.340 "iscsi_start_portal_group", 00:05:30.340 "iscsi_delete_portal_group", 00:05:30.340 "iscsi_create_portal_group", 00:05:30.340 "iscsi_get_portal_groups", 00:05:30.340 "iscsi_delete_target_node", 00:05:30.340 "iscsi_target_node_remove_pg_ig_maps", 00:05:30.340 "iscsi_target_node_add_pg_ig_maps", 00:05:30.340 "iscsi_create_target_node", 00:05:30.340 "iscsi_get_target_nodes", 00:05:30.340 "iscsi_delete_initiator_group", 00:05:30.340 "iscsi_initiator_group_remove_initiators", 00:05:30.340 "iscsi_initiator_group_add_initiators", 00:05:30.340 "iscsi_create_initiator_group", 00:05:30.340 "iscsi_get_initiator_groups", 00:05:30.340 "nvmf_set_crdt", 00:05:30.340 "nvmf_set_config", 00:05:30.340 "nvmf_set_max_subsystems", 00:05:30.340 "nvmf_stop_mdns_prr", 00:05:30.340 "nvmf_publish_mdns_prr", 00:05:30.340 "nvmf_subsystem_get_listeners", 
00:05:30.340 "nvmf_subsystem_get_qpairs", 00:05:30.340 "nvmf_subsystem_get_controllers", 00:05:30.340 "nvmf_get_stats", 00:05:30.340 "nvmf_get_transports", 00:05:30.340 "nvmf_create_transport", 00:05:30.340 "nvmf_get_targets", 00:05:30.340 "nvmf_delete_target", 00:05:30.340 "nvmf_create_target", 00:05:30.340 "nvmf_subsystem_allow_any_host", 00:05:30.340 "nvmf_subsystem_set_keys", 00:05:30.340 "nvmf_subsystem_remove_host", 00:05:30.340 "nvmf_subsystem_add_host", 00:05:30.340 "nvmf_ns_remove_host", 00:05:30.340 "nvmf_ns_add_host", 00:05:30.340 "nvmf_subsystem_remove_ns", 00:05:30.340 "nvmf_subsystem_set_ns_ana_group", 00:05:30.340 "nvmf_subsystem_add_ns", 00:05:30.340 "nvmf_subsystem_listener_set_ana_state", 00:05:30.340 "nvmf_discovery_get_referrals", 00:05:30.340 "nvmf_discovery_remove_referral", 00:05:30.340 "nvmf_discovery_add_referral", 00:05:30.340 "nvmf_subsystem_remove_listener", 00:05:30.340 "nvmf_subsystem_add_listener", 00:05:30.341 "nvmf_delete_subsystem", 00:05:30.341 "nvmf_create_subsystem", 00:05:30.341 "nvmf_get_subsystems", 00:05:30.341 "env_dpdk_get_mem_stats", 00:05:30.341 "nbd_get_disks", 00:05:30.341 "nbd_stop_disk", 00:05:30.341 "nbd_start_disk", 00:05:30.341 "ublk_recover_disk", 00:05:30.341 "ublk_get_disks", 00:05:30.341 "ublk_stop_disk", 00:05:30.341 "ublk_start_disk", 00:05:30.341 "ublk_destroy_target", 00:05:30.341 "ublk_create_target", 00:05:30.341 "virtio_blk_create_transport", 00:05:30.341 "virtio_blk_get_transports", 00:05:30.341 "vhost_controller_set_coalescing", 00:05:30.341 "vhost_get_controllers", 00:05:30.341 "vhost_delete_controller", 00:05:30.341 "vhost_create_blk_controller", 00:05:30.341 "vhost_scsi_controller_remove_target", 00:05:30.341 "vhost_scsi_controller_add_target", 00:05:30.341 "vhost_start_scsi_controller", 00:05:30.341 "vhost_create_scsi_controller", 00:05:30.341 "thread_set_cpumask", 00:05:30.341 "scheduler_set_options", 00:05:30.341 "framework_get_governor", 00:05:30.341 "framework_get_scheduler", 00:05:30.341 
"framework_set_scheduler", 00:05:30.341 "framework_get_reactors", 00:05:30.341 "thread_get_io_channels", 00:05:30.341 "thread_get_pollers", 00:05:30.341 "thread_get_stats", 00:05:30.341 "framework_monitor_context_switch", 00:05:30.341 "spdk_kill_instance", 00:05:30.341 "log_enable_timestamps", 00:05:30.341 "log_get_flags", 00:05:30.341 "log_clear_flag", 00:05:30.341 "log_set_flag", 00:05:30.341 "log_get_level", 00:05:30.341 "log_set_level", 00:05:30.341 "log_get_print_level", 00:05:30.341 "log_set_print_level", 00:05:30.341 "framework_enable_cpumask_locks", 00:05:30.341 "framework_disable_cpumask_locks", 00:05:30.341 "framework_wait_init", 00:05:30.341 "framework_start_init", 00:05:30.341 "scsi_get_devices", 00:05:30.341 "bdev_get_histogram", 00:05:30.341 "bdev_enable_histogram", 00:05:30.341 "bdev_set_qos_limit", 00:05:30.341 "bdev_set_qd_sampling_period", 00:05:30.341 "bdev_get_bdevs", 00:05:30.341 "bdev_reset_iostat", 00:05:30.341 "bdev_get_iostat", 00:05:30.341 "bdev_examine", 00:05:30.341 "bdev_wait_for_examine", 00:05:30.341 "bdev_set_options", 00:05:30.341 "accel_get_stats", 00:05:30.341 "accel_set_options", 00:05:30.341 "accel_set_driver", 00:05:30.341 "accel_crypto_key_destroy", 00:05:30.341 "accel_crypto_keys_get", 00:05:30.341 "accel_crypto_key_create", 00:05:30.341 "accel_assign_opc", 00:05:30.341 "accel_get_module_info", 00:05:30.341 "accel_get_opc_assignments", 00:05:30.341 "vmd_rescan", 00:05:30.341 "vmd_remove_device", 00:05:30.341 "vmd_enable", 00:05:30.341 "sock_get_default_impl", 00:05:30.341 "sock_set_default_impl", 00:05:30.341 "sock_impl_set_options", 00:05:30.341 "sock_impl_get_options", 00:05:30.341 "iobuf_get_stats", 00:05:30.341 "iobuf_set_options", 00:05:30.341 "keyring_get_keys", 00:05:30.341 "vfu_tgt_set_base_path", 00:05:30.341 "framework_get_pci_devices", 00:05:30.341 "framework_get_config", 00:05:30.341 "framework_get_subsystems", 00:05:30.341 "fsdev_set_opts", 00:05:30.341 "fsdev_get_opts", 00:05:30.341 "trace_get_info", 
00:05:30.341 "trace_get_tpoint_group_mask", 00:05:30.341 "trace_disable_tpoint_group", 00:05:30.341 "trace_enable_tpoint_group", 00:05:30.341 "trace_clear_tpoint_mask", 00:05:30.341 "trace_set_tpoint_mask", 00:05:30.341 "notify_get_notifications", 00:05:30.341 "notify_get_types", 00:05:30.341 "spdk_get_version", 00:05:30.341 "rpc_get_methods" 00:05:30.341 ] 00:05:30.341 05:27:18 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:30.341 05:27:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.341 05:27:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.341 05:27:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:30.341 05:27:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1573450 00:05:30.341 05:27:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1573450 ']' 00:05:30.341 05:27:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1573450 00:05:30.341 05:27:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:30.341 05:27:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.341 05:27:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573450 00:05:30.600 05:27:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.600 05:27:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.600 05:27:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573450' 00:05:30.600 killing process with pid 1573450 00:05:30.600 05:27:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1573450 00:05:30.600 05:27:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1573450 00:05:30.860 00:05:30.860 real 0m1.150s 00:05:30.860 user 0m1.921s 00:05:30.860 sys 0m0.448s 00:05:30.860 05:27:18 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.860 05:27:18 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.860 ************************************ 00:05:30.860 END TEST spdkcli_tcp 00:05:30.860 ************************************ 00:05:30.860 05:27:18 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.860 05:27:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.860 05:27:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.860 05:27:18 -- common/autotest_common.sh@10 -- # set +x 00:05:30.860 ************************************ 00:05:30.860 START TEST dpdk_mem_utility 00:05:30.860 ************************************ 00:05:30.860 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.860 * Looking for test storage... 00:05:30.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:30.860 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.860 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.860 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.120 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.120 05:27:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:31.120 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.120 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:05:31.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.120 --rc genhtml_branch_coverage=1
00:05:31.120 --rc genhtml_function_coverage=1
00:05:31.120 --rc genhtml_legend=1
00:05:31.120 --rc geninfo_all_blocks=1
00:05:31.120 --rc geninfo_unexecuted_blocks=1
00:05:31.120 
00:05:31.120 '
00:05:31.120 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:31.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.120 --rc genhtml_branch_coverage=1
00:05:31.120 --rc genhtml_function_coverage=1
00:05:31.120 --rc genhtml_legend=1
00:05:31.120 --rc geninfo_all_blocks=1
00:05:31.120 --rc geninfo_unexecuted_blocks=1
00:05:31.120 
00:05:31.120 '
00:05:31.120 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:31.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.120 --rc genhtml_branch_coverage=1
00:05:31.120 --rc genhtml_function_coverage=1
00:05:31.120 --rc genhtml_legend=1
00:05:31.120 --rc geninfo_all_blocks=1
00:05:31.120 --rc geninfo_unexecuted_blocks=1
00:05:31.120 
00:05:31.120 '
00:05:31.120 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:31.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.120 --rc genhtml_branch_coverage=1
00:05:31.120 --rc genhtml_function_coverage=1
00:05:31.120 --rc genhtml_legend=1
00:05:31.120 --rc geninfo_all_blocks=1
00:05:31.120 --rc geninfo_unexecuted_blocks=1
00:05:31.120 
00:05:31.120 '
00:05:31.121 05:27:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:31.121 05:27:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1573652
00:05:31.121 05:27:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:31.121 05:27:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1573652
00:05:31.121 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1573652 ']'
00:05:31.121 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:31.121 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:31.121 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:31.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:31.121 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:31.121 05:27:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:31.121 [2024-11-27 05:27:18.977677] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:05:31.121 [2024-11-27 05:27:18.977731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573652 ]
00:05:31.121 [2024-11-27 05:27:19.053102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:31.121 [2024-11-27 05:27:19.095490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.387 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:31.387 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:05:31.387 05:27:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:31.387 05:27:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:31.387 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:31.387 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:31.387 {
00:05:31.387 "filename": "/tmp/spdk_mem_dump.txt"
00:05:31.387 }
00:05:31.387 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:31.387 05:27:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:31.387 DPDK memory size 818.000000 MiB in 1 heap(s)
00:05:31.387 1 heaps totaling size 818.000000 MiB
00:05:31.387 size: 818.000000 MiB heap id: 0
00:05:31.387 end heaps----------
00:05:31.387 9 mempools totaling size 603.782043 MiB
00:05:31.387 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:31.387 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:31.387 size: 100.555481 MiB name: bdev_io_1573652
00:05:31.387 size: 50.003479 MiB name: msgpool_1573652
00:05:31.387 size: 36.509338 MiB name: fsdev_io_1573652
00:05:31.387 size: 21.763794 MiB name: PDU_Pool
00:05:31.387 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:31.387 size: 4.133484 MiB name: evtpool_1573652
00:05:31.387 size: 0.026123 MiB name: Session_Pool
00:05:31.387 end mempools-------
00:05:31.387 6 memzones totaling size 4.142822 MiB
00:05:31.387 size: 1.000366 MiB name: RG_ring_0_1573652
00:05:31.387 size: 1.000366 MiB name: RG_ring_1_1573652
00:05:31.387 size: 1.000366 MiB name: RG_ring_4_1573652
00:05:31.387 size: 1.000366 MiB name: RG_ring_5_1573652
00:05:31.387 size: 0.125366 MiB name: RG_ring_2_1573652
00:05:31.387 size: 0.015991 MiB name: RG_ring_3_1573652
00:05:31.387 end memzones-------
00:05:31.387 05:27:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:31.647 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15
00:05:31.647 list of free elements. size: 10.852478 MiB
00:05:31.647 element at address: 0x200019200000 with size: 0.999878 MiB
00:05:31.647 element at address: 0x200019400000 with size: 0.999878 MiB
00:05:31.647 element at address: 0x200000400000 with size: 0.998535 MiB
00:05:31.647 element at address: 0x200032000000 with size: 0.994446 MiB
00:05:31.647 element at address: 0x200006400000 with size: 0.959839 MiB
00:05:31.647 element at address: 0x200012c00000 with size: 0.944275 MiB
00:05:31.647 element at address: 0x200019600000 with size: 0.936584 MiB
00:05:31.647 element at address: 0x200000200000 with size: 0.717346 MiB
00:05:31.647 element at address: 0x20001ae00000 with size: 0.582886 MiB
00:05:31.647 element at address: 0x200000c00000 with size: 0.495422 MiB
00:05:31.647 element at address: 0x20000a600000 with size: 0.490723 MiB
00:05:31.647 element at address: 0x200019800000 with size: 0.485657 MiB
00:05:31.647 element at address: 0x200003e00000 with size: 0.481934 MiB
00:05:31.647 element at address: 0x200028200000 with size: 0.410034 MiB
00:05:31.647 element at address: 0x200000800000 with size: 0.355042 MiB
00:05:31.647 list of standard malloc elements. size: 199.218628 MiB
00:05:31.647 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:05:31.647 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:05:31.647 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:31.647 element at address: 0x2000194fff80 with size: 1.000122 MiB
00:05:31.647 element at address: 0x2000196fff80 with size: 1.000122 MiB
00:05:31.647 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:31.647 element at address: 0x2000196eff00 with size: 0.062622 MiB
00:05:31.647 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:31.647 element at address: 0x2000196efdc0 with size: 0.000305 MiB
00:05:31.648 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20000085b040 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20000085f300 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20000087f680 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:05:31.648 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:05:31.648 element at address: 0x200000cff000 with size: 0.000183 MiB
00:05:31.648 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:05:31.648 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:05:31.648 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:05:31.648 element at address: 0x200003efb980 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:05:31.648 element at address: 0x200012cf1bc0 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000196efc40 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000196efd00 with size: 0.000183 MiB
00:05:31.648 element at address: 0x2000198bc740 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20001ae95380 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20001ae95440 with size: 0.000183 MiB
00:05:31.648 element at address: 0x200028268f80 with size: 0.000183 MiB
00:05:31.648 element at address: 0x200028269040 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20002826fc40 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20002826fe40 with size: 0.000183 MiB
00:05:31.648 element at address: 0x20002826ff00 with size: 0.000183 MiB
00:05:31.648 list of memzone associated elements. size: 607.928894 MiB
00:05:31.648 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:05:31.648 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:31.648 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:05:31.648 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:31.648 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:05:31.648 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1573652_0
00:05:31.648 element at address: 0x200000dff380 with size: 48.003052 MiB
00:05:31.648 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1573652_0
00:05:31.648 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:05:31.648 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1573652_0
00:05:31.648 element at address: 0x2000199be940 with size: 20.255554 MiB
00:05:31.648 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:31.648 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:05:31.648 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:31.648 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:05:31.648 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1573652_0
00:05:31.648 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:05:31.648 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1573652
00:05:31.648 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:31.648 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1573652
00:05:31.648 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:05:31.648 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:31.648 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:05:31.648 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:31.648 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:05:31.648 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:31.648 element at address: 0x200003efba40 with size: 1.008118 MiB
00:05:31.648 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:31.648 element at address: 0x200000cff180 with size: 1.000488 MiB
00:05:31.648 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1573652
00:05:31.648 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:05:31.648 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1573652
00:05:31.648 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:05:31.648 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1573652
00:05:31.648 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:05:31.648 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1573652
00:05:31.648 element at address: 0x20000087f740 with size: 0.500488 MiB
00:05:31.648 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1573652
00:05:31.648 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:05:31.648 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1573652
00:05:31.648 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:05:31.648 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:31.648 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:05:31.648 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:31.648 element at address: 0x20001987c540 with size: 0.250488 MiB
00:05:31.648 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:31.648 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:05:31.648 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1573652
00:05:31.648 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:05:31.648 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1573652
00:05:31.648 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:05:31.648 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:31.648 element at address: 0x200028269100 with size: 0.023743 MiB
00:05:31.648 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:31.648 element at address: 0x20000085b100 with size: 0.016113 MiB
00:05:31.648 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1573652
00:05:31.648 element at address: 0x20002826f240 with size: 0.002441 MiB
00:05:31.648 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:31.648 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:05:31.648 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1573652
00:05:31.648 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:05:31.648 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1573652
00:05:31.648 element at address: 0x20000085af00 with size: 0.000305 MiB
00:05:31.648 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1573652
00:05:31.648 element at address: 0x20002826fd00 with size: 0.000305 MiB
00:05:31.648 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:31.648 05:27:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:31.648 05:27:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1573652
00:05:31.648 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1573652 ']'
00:05:31.648 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1573652
00:05:31.648 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:31.648 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:31.648 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573652
00:05:31.648 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:31.648 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:31.648 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573652'
00:05:31.648 killing process with pid 1573652
00:05:31.648 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1573652
00:05:31.648 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1573652
00:05:31.907 
00:05:31.907 real 0m1.022s
00:05:31.907 user 0m0.948s
00:05:31.907 sys 0m0.412s
00:05:31.907 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:31.907 05:27:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:31.907 ************************************
00:05:31.907 END TEST dpdk_mem_utility
00:05:31.907 ************************************
00:05:31.907 05:27:19 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:31.907 05:27:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:31.907 05:27:19 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:31.907 05:27:19 -- common/autotest_common.sh@10 -- # set +x
00:05:31.907 ************************************
00:05:31.907 START TEST event
00:05:31.907 ************************************
00:05:31.907 05:27:19 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:32.167 * Looking for test storage...
00:05:32.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:32.167 05:27:19 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:32.167 05:27:19 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:32.167 05:27:19 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:32.167 05:27:19 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:32.167 05:27:19 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:32.167 05:27:19 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:32.167 05:27:19 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:32.167 05:27:19 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:32.167 05:27:19 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:32.167 05:27:19 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:32.167 05:27:19 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:32.167 05:27:19 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:32.167 05:27:19 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:32.167 05:27:19 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:32.167 05:27:19 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:32.167 05:27:19 event -- scripts/common.sh@344 -- # case "$op" in
00:05:32.167 05:27:19 event -- scripts/common.sh@345 -- # : 1
00:05:32.167 05:27:19 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:32.167 05:27:19 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:32.167 05:27:19 event -- scripts/common.sh@365 -- # decimal 1
00:05:32.167 05:27:20 event -- scripts/common.sh@353 -- # local d=1
00:05:32.167 05:27:20 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:32.167 05:27:20 event -- scripts/common.sh@355 -- # echo 1
00:05:32.167 05:27:20 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:32.167 05:27:20 event -- scripts/common.sh@366 -- # decimal 2
00:05:32.167 05:27:20 event -- scripts/common.sh@353 -- # local d=2
00:05:32.167 05:27:20 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:32.167 05:27:20 event -- scripts/common.sh@355 -- # echo 2
00:05:32.167 05:27:20 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:32.167 05:27:20 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:32.167 05:27:20 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:32.167 05:27:20 event -- scripts/common.sh@368 -- # return 0
00:05:32.167 05:27:20 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:32.167 05:27:20 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.167 --rc genhtml_branch_coverage=1
00:05:32.167 --rc genhtml_function_coverage=1
00:05:32.167 --rc genhtml_legend=1
00:05:32.167 --rc geninfo_all_blocks=1
00:05:32.167 --rc geninfo_unexecuted_blocks=1
00:05:32.167 
00:05:32.167 '
00:05:32.167 05:27:20 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.167 --rc genhtml_branch_coverage=1
00:05:32.167 --rc genhtml_function_coverage=1
00:05:32.167 --rc genhtml_legend=1
00:05:32.167 --rc geninfo_all_blocks=1
00:05:32.167 --rc geninfo_unexecuted_blocks=1
00:05:32.167 
00:05:32.167 '
00:05:32.167 05:27:20 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.167 --rc genhtml_branch_coverage=1
00:05:32.167 --rc genhtml_function_coverage=1
00:05:32.167 --rc genhtml_legend=1
00:05:32.167 --rc geninfo_all_blocks=1
00:05:32.167 --rc geninfo_unexecuted_blocks=1
00:05:32.167 
00:05:32.167 '
00:05:32.167 05:27:20 event -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:32.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.167 --rc genhtml_branch_coverage=1
00:05:32.167 --rc genhtml_function_coverage=1
00:05:32.167 --rc genhtml_legend=1
00:05:32.167 --rc geninfo_all_blocks=1
00:05:32.167 --rc geninfo_unexecuted_blocks=1
00:05:32.167 
00:05:32.167 '
00:05:32.167 05:27:20 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:32.167 05:27:20 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:32.167 05:27:20 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:32.167 05:27:20 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:32.167 05:27:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:32.167 05:27:20 event -- common/autotest_common.sh@10 -- # set +x
00:05:32.167 ************************************
00:05:32.167 START TEST event_perf
00:05:32.167 ************************************
00:05:32.167 05:27:20 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:32.167 Running I/O for 1 seconds...[2024-11-27 05:27:20.071799] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:05:32.167 [2024-11-27 05:27:20.071866] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573909 ]
00:05:32.167 [2024-11-27 05:27:20.149955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:32.426 [2024-11-27 05:27:20.193865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:32.426 [2024-11-27 05:27:20.193975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:32.426 [2024-11-27 05:27:20.194058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.426 [2024-11-27 05:27:20.194058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:33.365 Running I/O for 1 seconds...
00:05:33.365 lcore 0: 202563
00:05:33.365 lcore 1: 202561
00:05:33.365 lcore 2: 202562
00:05:33.365 lcore 3: 202563
00:05:33.365 done.
00:05:33.365 
00:05:33.365 real 0m1.184s
00:05:33.365 user 0m4.110s
00:05:33.365 sys 0m0.071s
00:05:33.365 05:27:21 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.365 05:27:21 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:33.365 ************************************
00:05:33.365 END TEST event_perf
00:05:33.365 ************************************
00:05:33.365 05:27:21 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:33.365 05:27:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:33.365 05:27:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:33.365 05:27:21 event -- common/autotest_common.sh@10 -- # set +x
00:05:33.365 ************************************
00:05:33.365 START TEST event_reactor
00:05:33.365 ************************************
00:05:33.365 05:27:21 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:33.365 [2024-11-27 05:27:21.323847] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:05:33.365 [2024-11-27 05:27:21.323923] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574161 ]
00:05:33.623 [2024-11-27 05:27:21.401601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:33.623 [2024-11-27 05:27:21.440047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:34.562 test_start
00:05:34.562 oneshot
00:05:34.562 tick 100
00:05:34.562 tick 100
00:05:34.562 tick 250
00:05:34.562 tick 100
00:05:34.562 tick 100
00:05:34.562 tick 100
00:05:34.562 tick 250
00:05:34.562 tick 500
00:05:34.562 tick 100
00:05:34.562 tick 100
00:05:34.562 tick 250
00:05:34.562 tick 100
00:05:34.562 tick 100
00:05:34.562 test_end
00:05:34.562 
00:05:34.562 real 0m1.173s
00:05:34.562 user 0m1.098s
00:05:34.562 sys 0m0.071s
00:05:34.562 05:27:22 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:34.562 05:27:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:34.562 ************************************
00:05:34.562 END TEST event_reactor
00:05:34.562 ************************************
00:05:34.562 05:27:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:34.562 05:27:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:34.562 05:27:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:34.562 05:27:22 event -- common/autotest_common.sh@10 -- # set +x
00:05:34.562 ************************************
00:05:34.562 START TEST event_reactor_perf
00:05:34.562 ************************************
00:05:34.562 05:27:22 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:34.821 [2024-11-27 05:27:22.566632] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:05:34.821 [2024-11-27 05:27:22.566714] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574409 ]
00:05:34.821 [2024-11-27 05:27:22.646124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:34.821 [2024-11-27 05:27:22.684073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.759 test_start
00:05:35.759 test_end
00:05:35.759 Performance: 509534 events per second
00:05:35.759 
00:05:35.759 real 0m1.177s
00:05:35.759 user 0m1.100s
00:05:35.759 sys 0m0.073s
00:05:35.759 05:27:23 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:35.759 05:27:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:35.759 ************************************
00:05:35.759 END TEST event_reactor_perf
00:05:35.759 ************************************
00:05:35.759 05:27:23 event -- event/event.sh@49 -- # uname -s
00:05:35.759 05:27:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:35.759 05:27:23 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:35.759 05:27:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:35.759 05:27:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:35.759 05:27:23 event -- common/autotest_common.sh@10 -- # set +x
00:05:36.020 ************************************
00:05:36.020 START TEST event_scheduler
00:05:36.020 ************************************
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:36.020 * Looking for test storage...
00:05:36.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:36.020 05:27:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:36.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.020 --rc genhtml_branch_coverage=1
00:05:36.020 --rc genhtml_function_coverage=1
00:05:36.020 --rc genhtml_legend=1
00:05:36.020 --rc geninfo_all_blocks=1
00:05:36.020 --rc geninfo_unexecuted_blocks=1
00:05:36.020 
00:05:36.020 '
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:36.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.020 --rc genhtml_branch_coverage=1
00:05:36.020 --rc genhtml_function_coverage=1
00:05:36.020 --rc genhtml_legend=1
00:05:36.020 --rc geninfo_all_blocks=1
00:05:36.020 --rc geninfo_unexecuted_blocks=1
00:05:36.020 
00:05:36.020 '
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:36.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.020 --rc genhtml_branch_coverage=1
00:05:36.020 --rc genhtml_function_coverage=1
00:05:36.020 --rc genhtml_legend=1
00:05:36.020 --rc geninfo_all_blocks=1
00:05:36.020 --rc geninfo_unexecuted_blocks=1
00:05:36.020 
00:05:36.020 '
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:36.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.020 --rc genhtml_branch_coverage=1
00:05:36.020 --rc genhtml_function_coverage=1
00:05:36.020 --rc genhtml_legend=1
00:05:36.020 --rc geninfo_all_blocks=1
00:05:36.020 --rc geninfo_unexecuted_blocks=1
00:05:36.020 
00:05:36.020 '
00:05:36.020 05:27:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:36.020 05:27:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1574696
00:05:36.020 05:27:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:36.020 05:27:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:36.020 05:27:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1574696
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1574696 ']'
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:36.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:36.020 05:27:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:36.020 [2024-11-27 05:27:24.016216] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:05:36.020 [2024-11-27 05:27:24.016264] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574696 ]
00:05:36.280 [2024-11-27 05:27:24.074139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:36.280 [2024-11-27 05:27:24.119781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.280 [2024-11-27 05:27:24.119889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:36.280 [2024-11-27 05:27:24.119996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:36.280 [2024-11-27 05:27:24.119997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:36.280 05:27:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:36.280 [2024-11-27 05:27:24.164484] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:36.280 [2024-11-27 05:27:24.164500] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:36.280 [2024-11-27 05:27:24.164509] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:36.280 [2024-11-27 05:27:24.164515] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:36.280 [2024-11-27 05:27:24.164520] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:36.280 05:27:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:36.280 [2024-11-27 05:27:24.239121] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:36.280 05:27:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.280 05:27:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:36.280 ************************************
00:05:36.280 START TEST scheduler_create_thread
00:05:36.280 ************************************
00:05:36.280 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:36.280 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:36.280 05:27:24 event.event_scheduler.scheduler_create_thread
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.280 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.539 2 00:05:36.539 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.540 3 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.540 4 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.540 5 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.540 6 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.540 7 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.540 8 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.540 9 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.540 10 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.540 05:27:24 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.540 05:27:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.919 05:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.919 05:27:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.919 05:27:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.919 05:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.919 05:27:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.299 05:27:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.299 00:05:39.299 real 0m2.619s 00:05:39.299 user 0m0.023s 00:05:39.299 sys 0m0.007s 00:05:39.299 05:27:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.299 05:27:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.299 ************************************ 00:05:39.299 END TEST scheduler_create_thread 00:05:39.299 ************************************ 00:05:39.299 05:27:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:39.299 05:27:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1574696 00:05:39.299 05:27:26 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1574696 ']' 00:05:39.299 05:27:26 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1574696 00:05:39.299 05:27:26 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:39.299 05:27:26 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.299 05:27:26 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574696 00:05:39.299 05:27:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:39.299 05:27:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:39.299 05:27:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574696' 00:05:39.299 killing process with pid 1574696 00:05:39.299 05:27:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1574696 00:05:39.299 05:27:26 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1574696 00:05:39.558 [2024-11-27 05:27:27.373190] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:39.558 00:05:39.558 real 0m3.747s 00:05:39.558 user 0m5.606s 00:05:39.558 sys 0m0.367s 00:05:39.558 05:27:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.558 05:27:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.558 ************************************ 00:05:39.558 END TEST event_scheduler 00:05:39.558 ************************************ 00:05:39.818 05:27:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:39.818 05:27:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:39.818 05:27:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.818 05:27:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.818 05:27:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.818 ************************************ 00:05:39.818 START TEST app_repeat 00:05:39.818 ************************************ 00:05:39.818 05:27:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1575427 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1575427' 00:05:39.818 Process app_repeat pid: 1575427 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:39.818 spdk_app_start Round 0 00:05:39.818 05:27:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1575427 /var/tmp/spdk-nbd.sock 00:05:39.818 05:27:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1575427 ']' 00:05:39.818 05:27:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.818 05:27:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.818 05:27:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:39.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.818 05:27:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.818 05:27:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.818 [2024-11-27 05:27:27.654391] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:05:39.818 [2024-11-27 05:27:27.654443] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575427 ] 00:05:39.818 [2024-11-27 05:27:27.732088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.818 [2024-11-27 05:27:27.772367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.818 [2024-11-27 05:27:27.772369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.077 05:27:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.077 05:27:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.077 05:27:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.077 Malloc0 00:05:40.077 05:27:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.336 Malloc1 00:05:40.336 05:27:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.336 
05:27:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.336 05:27:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.595 /dev/nbd0 00:05:40.595 05:27:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.595 05:27:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.595 05:27:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:40.595 05:27:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.595 05:27:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.595 05:27:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.595 05:27:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:40.595 05:27:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.595 05:27:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.595 05:27:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.595 05:27:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:40.595 1+0 records in 00:05:40.595 1+0 records out 00:05:40.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186494 s, 22.0 MB/s 00:05:40.596 05:27:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.596 05:27:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.596 05:27:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.596 05:27:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.596 05:27:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.596 05:27:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.596 05:27:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.596 05:27:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.855 /dev/nbd1 00:05:40.855 05:27:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.855 05:27:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.855 05:27:28 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.855 1+0 records in 00:05:40.855 1+0 records out 00:05:40.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218611 s, 18.7 MB/s 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.855 05:27:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.855 05:27:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.855 05:27:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.855 05:27:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.855 05:27:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.855 05:27:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.114 05:27:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.114 { 00:05:41.114 "nbd_device": "/dev/nbd0", 00:05:41.114 "bdev_name": "Malloc0" 00:05:41.114 }, 00:05:41.114 { 00:05:41.114 "nbd_device": "/dev/nbd1", 00:05:41.114 "bdev_name": "Malloc1" 00:05:41.114 } 00:05:41.114 ]' 00:05:41.114 05:27:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.114 { 00:05:41.114 "nbd_device": "/dev/nbd0", 00:05:41.114 "bdev_name": "Malloc0" 00:05:41.114 
}, 00:05:41.114 { 00:05:41.114 "nbd_device": "/dev/nbd1", 00:05:41.114 "bdev_name": "Malloc1" 00:05:41.114 } 00:05:41.114 ]' 00:05:41.114 05:27:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.114 /dev/nbd1' 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.114 /dev/nbd1' 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.114 256+0 records in 00:05:41.114 256+0 records out 00:05:41.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101311 s, 104 MB/s 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.114 256+0 records in 00:05:41.114 256+0 records out 00:05:41.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145027 s, 72.3 MB/s 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.114 256+0 records in 00:05:41.114 256+0 records out 00:05:41.114 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014473 s, 72.5 MB/s 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.114 05:27:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.115 05:27:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.115 05:27:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.115 05:27:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.115 05:27:29 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.115 05:27:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.115 05:27:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.115 05:27:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.115 05:27:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.115 05:27:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.115 05:27:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.115 05:27:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.373 05:27:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.373 05:27:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.373 05:27:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.373 05:27:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.373 05:27:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.373 05:27:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.373 05:27:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.373 05:27:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.373 05:27:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.373 05:27:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.632 05:27:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.632 05:27:29 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.632 05:27:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.632 05:27:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.632 05:27:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.632 05:27:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.632 05:27:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.632 05:27:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.632 05:27:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.632 05:27:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.632 05:27:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.891 05:27:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.891 05:27:29 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.150 05:27:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.150 [2024-11-27 05:27:30.112408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.150 [2024-11-27 05:27:30.151313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.150 [2024-11-27 05:27:30.151314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.409 [2024-11-27 05:27:30.191979] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.409 [2024-11-27 05:27:30.192030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.699 05:27:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.699 05:27:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.699 spdk_app_start Round 1 00:05:45.699 05:27:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1575427 /var/tmp/spdk-nbd.sock 00:05:45.699 05:27:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1575427 ']' 00:05:45.699 05:27:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.699 05:27:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.699 05:27:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:45.699 05:27:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.699 05:27:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.699 05:27:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.699 05:27:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:45.699 05:27:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.699 Malloc0 00:05:45.699 05:27:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.699 Malloc1 00:05:45.699 05:27:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.699 05:27:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.960 /dev/nbd0 00:05:45.960 05:27:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.960 05:27:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.960 1+0 records in 00:05:45.960 1+0 records out 00:05:45.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185035 s, 22.1 MB/s 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.960 05:27:33 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.960 05:27:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.960 05:27:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.960 05:27:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.960 05:27:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.219 /dev/nbd1 00:05:46.219 05:27:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.219 05:27:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.219 1+0 records in 00:05:46.219 1+0 records out 00:05:46.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256325 s, 16.0 MB/s 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.219 05:27:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.219 05:27:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.219 05:27:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.219 05:27:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.219 05:27:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.219 05:27:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.478 { 00:05:46.478 "nbd_device": "/dev/nbd0", 00:05:46.478 "bdev_name": "Malloc0" 00:05:46.478 }, 00:05:46.478 { 00:05:46.478 "nbd_device": "/dev/nbd1", 00:05:46.478 "bdev_name": "Malloc1" 00:05:46.478 } 00:05:46.478 ]' 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.478 { 00:05:46.478 "nbd_device": "/dev/nbd0", 00:05:46.478 "bdev_name": "Malloc0" 00:05:46.478 }, 00:05:46.478 { 00:05:46.478 "nbd_device": "/dev/nbd1", 00:05:46.478 "bdev_name": "Malloc1" 00:05:46.478 } 00:05:46.478 ]' 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.478 /dev/nbd1' 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.478 /dev/nbd1' 00:05:46.478 
05:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.478 256+0 records in 00:05:46.478 256+0 records out 00:05:46.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010006 s, 105 MB/s 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.478 256+0 records in 00:05:46.478 256+0 records out 00:05:46.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140383 s, 74.7 MB/s 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.478 256+0 records in 00:05:46.478 256+0 records out 00:05:46.478 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147599 s, 71.0 MB/s 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.478 05:27:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.738 05:27:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.738 05:27:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.738 05:27:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.738 05:27:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.738 05:27:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.738 05:27:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.738 05:27:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.738 05:27:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.738 05:27:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.738 05:27:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.997 05:27:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.997 05:27:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.997 05:27:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.997 05:27:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.997 05:27:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.997 05:27:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.997 05:27:34 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:46.997 05:27:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.997 05:27:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.997 05:27:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.997 05:27:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.257 05:27:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.257 05:27:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.517 05:27:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.517 [2024-11-27 05:27:35.424435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.517 [2024-11-27 05:27:35.460785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.517 [2024-11-27 05:27:35.460785] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.517 [2024-11-27 05:27:35.502293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.517 [2024-11-27 05:27:35.502336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.808 05:27:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.808 05:27:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.808 spdk_app_start Round 2 00:05:50.808 05:27:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1575427 /var/tmp/spdk-nbd.sock 00:05:50.808 05:27:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1575427 ']' 00:05:50.808 05:27:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.808 05:27:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.808 05:27:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:50.808 05:27:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.808 05:27:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.808 05:27:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.808 05:27:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.808 05:27:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.808 Malloc0 00:05:50.808 05:27:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.067 Malloc1 00:05:51.067 05:27:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.067 05:27:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.326 /dev/nbd0 00:05:51.326 05:27:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.326 05:27:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.326 1+0 records in 00:05:51.326 1+0 records out 00:05:51.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233513 s, 17.5 MB/s 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.326 05:27:39 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.326 05:27:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.326 05:27:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.326 05:27:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.326 05:27:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.584 /dev/nbd1 00:05:51.584 05:27:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.584 05:27:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.584 1+0 records in 00:05:51.584 1+0 records out 00:05:51.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236359 s, 17.3 MB/s 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.584 05:27:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.584 05:27:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.584 05:27:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.584 05:27:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.584 05:27:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.584 05:27:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.584 05:27:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.584 { 00:05:51.584 "nbd_device": "/dev/nbd0", 00:05:51.584 "bdev_name": "Malloc0" 00:05:51.584 }, 00:05:51.584 { 00:05:51.584 "nbd_device": "/dev/nbd1", 00:05:51.584 "bdev_name": "Malloc1" 00:05:51.584 } 00:05:51.584 ]' 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.843 { 00:05:51.843 "nbd_device": "/dev/nbd0", 00:05:51.843 "bdev_name": "Malloc0" 00:05:51.843 }, 00:05:51.843 { 00:05:51.843 "nbd_device": "/dev/nbd1", 00:05:51.843 "bdev_name": "Malloc1" 00:05:51.843 } 00:05:51.843 ]' 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.843 /dev/nbd1' 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.843 /dev/nbd1' 00:05:51.843 
05:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.843 256+0 records in 00:05:51.843 256+0 records out 00:05:51.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010674 s, 98.2 MB/s 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.843 256+0 records in 00:05:51.843 256+0 records out 00:05:51.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131869 s, 79.5 MB/s 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.843 256+0 records in 00:05:51.843 256+0 records out 00:05:51.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141861 s, 73.9 MB/s 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.843 05:27:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.103 05:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.103 05:27:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.103 05:27:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.103 05:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.103 05:27:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.103 05:27:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.103 05:27:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.103 05:27:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.103 05:27:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.103 05:27:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.362 05:27:40 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.362 05:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.622 05:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.622 05:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.622 05:27:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.622 05:27:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.622 05:27:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.622 05:27:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.622 05:27:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.622 05:27:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.882 [2024-11-27 05:27:40.729614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.882 [2024-11-27 05:27:40.766133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.882 [2024-11-27 05:27:40.766134] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.882 [2024-11-27 05:27:40.806243] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.882 [2024-11-27 05:27:40.806284] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.174 05:27:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1575427 /var/tmp/spdk-nbd.sock 00:05:56.174 05:27:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1575427 ']' 00:05:56.174 05:27:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.174 05:27:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.174 05:27:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:56.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:56.174 05:27:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.174 05:27:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.174 05:27:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.174 05:27:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:56.174 05:27:43 event.app_repeat -- event/event.sh@39 -- # killprocess 1575427 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1575427 ']' 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1575427 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1575427 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1575427' 00:05:56.175 killing process with pid 1575427 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1575427 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1575427 00:05:56.175 spdk_app_start is called in Round 0. 00:05:56.175 Shutdown signal received, stop current app iteration 00:05:56.175 Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 reinitialization... 00:05:56.175 spdk_app_start is called in Round 1. 00:05:56.175 Shutdown signal received, stop current app iteration 00:05:56.175 Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 reinitialization... 00:05:56.175 spdk_app_start is called in Round 2. 
00:05:56.175 Shutdown signal received, stop current app iteration 00:05:56.175 Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 reinitialization... 00:05:56.175 spdk_app_start is called in Round 3. 00:05:56.175 Shutdown signal received, stop current app iteration 00:05:56.175 05:27:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:56.175 05:27:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:56.175 00:05:56.175 real 0m16.369s 00:05:56.175 user 0m35.907s 00:05:56.175 sys 0m2.587s 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.175 05:27:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.175 ************************************ 00:05:56.175 END TEST app_repeat 00:05:56.175 ************************************ 00:05:56.175 05:27:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:56.175 05:27:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:56.175 05:27:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.175 05:27:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.175 05:27:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.175 ************************************ 00:05:56.175 START TEST cpu_locks 00:05:56.175 ************************************ 00:05:56.175 05:27:44 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:56.175 * Looking for test storage... 
00:05:56.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:56.175 05:27:44 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:56.175 05:27:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.175 05:27:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:56.434 05:27:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.434 05:27:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:56.435 05:27:44 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.435 05:27:44 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.435 05:27:44 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.435 05:27:44 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:56.435 05:27:44 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.435 05:27:44 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:56.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.435 --rc genhtml_branch_coverage=1 00:05:56.435 --rc genhtml_function_coverage=1 00:05:56.435 --rc genhtml_legend=1 00:05:56.435 --rc geninfo_all_blocks=1 00:05:56.435 --rc geninfo_unexecuted_blocks=1 00:05:56.435 00:05:56.435 ' 00:05:56.435 05:27:44 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:56.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.435 --rc genhtml_branch_coverage=1 00:05:56.435 --rc genhtml_function_coverage=1 00:05:56.435 --rc genhtml_legend=1 00:05:56.435 --rc geninfo_all_blocks=1 00:05:56.435 --rc geninfo_unexecuted_blocks=1 
00:05:56.435 00:05:56.435 ' 00:05:56.435 05:27:44 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:56.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.435 --rc genhtml_branch_coverage=1 00:05:56.435 --rc genhtml_function_coverage=1 00:05:56.435 --rc genhtml_legend=1 00:05:56.435 --rc geninfo_all_blocks=1 00:05:56.435 --rc geninfo_unexecuted_blocks=1 00:05:56.435 00:05:56.435 ' 00:05:56.435 05:27:44 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:56.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.435 --rc genhtml_branch_coverage=1 00:05:56.435 --rc genhtml_function_coverage=1 00:05:56.435 --rc genhtml_legend=1 00:05:56.435 --rc geninfo_all_blocks=1 00:05:56.435 --rc geninfo_unexecuted_blocks=1 00:05:56.435 00:05:56.435 ' 00:05:56.435 05:27:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.435 05:27:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.435 05:27:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.435 05:27:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.435 05:27:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.435 05:27:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.435 05:27:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.435 ************************************ 00:05:56.435 START TEST default_locks 00:05:56.435 ************************************ 00:05:56.435 05:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:56.435 05:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1578431 00:05:56.435 05:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1578431 00:05:56.435 05:27:44 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.435 05:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1578431 ']' 00:05:56.435 05:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.435 05:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.435 05:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.435 05:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.435 05:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.435 [2024-11-27 05:27:44.324158] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:05:56.435 [2024-11-27 05:27:44.324201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578431 ] 00:05:56.435 [2024-11-27 05:27:44.399140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.694 [2024-11-27 05:27:44.441160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.694 05:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.694 05:27:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:56.694 05:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1578431 00:05:56.694 05:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1578431 00:05:56.694 05:27:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.263 lslocks: write error 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1578431 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1578431 ']' 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1578431 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1578431 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1578431' 00:05:57.263 killing process with pid 1578431 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1578431 00:05:57.263 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1578431 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1578431 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1578431 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1578431 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1578431 ']' 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1578431) - No such process 00:05:57.523 ERROR: process (pid: 1578431) is no longer running 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.523 00:05:57.523 real 0m1.137s 00:05:57.523 user 0m1.076s 00:05:57.523 sys 0m0.534s 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.523 05:27:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.523 ************************************ 00:05:57.523 END TEST default_locks 00:05:57.523 ************************************ 00:05:57.523 05:27:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:57.523 05:27:45 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.523 05:27:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.523 05:27:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.523 ************************************ 00:05:57.523 START TEST default_locks_via_rpc 00:05:57.523 ************************************ 00:05:57.523 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:57.523 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1578690 00:05:57.523 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1578690 00:05:57.523 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.523 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1578690 ']' 00:05:57.523 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.523 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.523 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.523 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.523 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.523 [2024-11-27 05:27:45.525415] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:05:57.523 [2024-11-27 05:27:45.525457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578690 ] 00:05:57.782 [2024-11-27 05:27:45.596558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.782 [2024-11-27 05:27:45.638644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.041 05:27:45 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1578690 00:05:58.041 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1578690 00:05:58.042 05:27:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1578690 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1578690 ']' 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1578690 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1578690 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1578690' 00:05:58.301 killing process with pid 1578690 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1578690 00:05:58.301 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1578690 00:05:58.868 00:05:58.868 real 0m1.108s 00:05:58.868 user 0m1.050s 00:05:58.868 sys 0m0.507s 00:05:58.868 05:27:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.868 05:27:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.868 ************************************ 00:05:58.868 END TEST default_locks_via_rpc 00:05:58.868 ************************************ 00:05:58.868 05:27:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:58.868 05:27:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.868 05:27:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.868 05:27:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.868 ************************************ 00:05:58.868 START TEST non_locking_app_on_locked_coremask 00:05:58.868 ************************************ 00:05:58.868 05:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:58.868 05:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1578891 00:05:58.868 05:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1578891 /var/tmp/spdk.sock 00:05:58.868 05:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.868 05:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1578891 ']' 00:05:58.868 05:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.868 05:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.868 05:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:58.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.868 05:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.868 05:27:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.868 [2024-11-27 05:27:46.700716] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:05:58.868 [2024-11-27 05:27:46.700760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578891 ] 00:05:58.868 [2024-11-27 05:27:46.774314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.868 [2024-11-27 05:27:46.816133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1578947 00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1578947 /var/tmp/spdk2.sock 00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1578947 ']' 00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock
00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:59.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:59.128 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:59.128 [2024-11-27 05:27:47.078238] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:05:59.128 [2024-11-27 05:27:47.078285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578947 ]
00:05:59.387 [2024-11-27 05:27:47.166071] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:59.387 [2024-11-27 05:27:47.166100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:59.387 [2024-11-27 05:27:47.254134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:59.956 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:59.956 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:59.956 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1578891
00:05:59.956 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:59.956 05:27:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1578891
00:06:00.523 lslocks: write error
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1578891
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1578891 ']'
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1578891
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1578891
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1578891'
00:06:00.523 killing process with pid 1578891
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1578891
00:06:00.523 05:27:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1578891
00:06:01.090 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1578947
00:06:01.090 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1578947 ']'
00:06:01.090 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1578947
00:06:01.090 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:01.090 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:01.090 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1578947
00:06:01.351 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:01.351 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:01.351 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1578947'
00:06:01.351 killing process with pid 1578947
00:06:01.351 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1578947
00:06:01.351 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1578947
00:06:01.613
00:06:01.613 real 0m2.774s
00:06:01.613 user 0m2.913s
00:06:01.613 sys 0m0.907s
00:06:01.613 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:01.613 05:27:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:01.613 ************************************
00:06:01.613 END TEST non_locking_app_on_locked_coremask
00:06:01.613 ************************************
00:06:01.613 05:27:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:01.613 05:27:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:01.613 05:27:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:01.613 05:27:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:01.613 ************************************
00:06:01.613 START TEST locking_app_on_unlocked_coremask
00:06:01.613 ************************************
00:06:01.613 05:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:01.613 05:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1579437
00:06:01.613 05:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1579437 /var/tmp/spdk.sock
00:06:01.613 05:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:01.613 05:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1579437 ']'
00:06:01.613 05:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:01.613 05:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:01.613 05:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:01.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:01.613 05:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:01.613 05:27:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:01.613 [2024-11-27 05:27:49.541570] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:06:01.613 [2024-11-27 05:27:49.541613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579437 ]
00:06:01.872 [2024-11-27 05:27:49.616840] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:01.872 [2024-11-27 05:27:49.616866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.872 [2024-11-27 05:27:49.658528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1579460
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1579460 /var/tmp/spdk2.sock
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1579460 ']'
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:02.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:02.440 05:27:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.440 [2024-11-27 05:27:50.415864] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:06:02.440 [2024-11-27 05:27:50.415911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579460 ]
00:06:02.699 [2024-11-27 05:27:50.504127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.699 [2024-11-27 05:27:50.592197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.267 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:03.267 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:03.267 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1579460
00:06:03.267 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1579460
00:06:03.267 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:03.835 lslocks: write error
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1579437
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1579437 ']'
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1579437
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1579437
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1579437'
00:06:03.835 killing process with pid 1579437
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1579437
00:06:03.835 05:27:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1579437
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1579460
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1579460 ']'
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1579460
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1579460
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1579460'
00:06:04.403 killing process with pid 1579460
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1579460
00:06:04.403 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1579460
00:06:04.662
00:06:04.662 real 0m3.050s
00:06:04.662 user 0m3.328s
00:06:04.662 sys 0m0.862s
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:04.662 ************************************
00:06:04.662 END TEST locking_app_on_unlocked_coremask
00:06:04.662 ************************************
00:06:04.662 05:27:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:04.662 05:27:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:04.662 05:27:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:04.662 05:27:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:04.662 ************************************
00:06:04.662 START TEST locking_app_on_locked_coremask
00:06:04.662 ************************************
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1579948
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1579948 /var/tmp/spdk.sock
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1579948 ']'
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:04.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:04.662 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:04.662 [2024-11-27 05:27:52.661109] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:06:04.662 [2024-11-27 05:27:52.661151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579948 ]
00:06:04.922 [2024-11-27 05:27:52.736720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.922 [2024-11-27 05:27:52.778398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1579954
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1579954 /var/tmp/spdk2.sock
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1579954 /var/tmp/spdk2.sock
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1579954 /var/tmp/spdk2.sock
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1579954 ']'
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:05.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:05.182 05:27:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:05.182 [2024-11-27 05:27:53.044267] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:06:05.182 [2024-11-27 05:27:53.044317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579954 ]
00:06:05.182 [2024-11-27 05:27:53.130003] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1579948 has claimed it.
00:06:05.182 [2024-11-27 05:27:53.130036] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:05.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1579954) - No such process
00:06:05.752 ERROR: process (pid: 1579954) is no longer running
00:06:05.752 05:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:05.752 05:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:05.752 05:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:05.752 05:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:05.752 05:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:05.752 05:27:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:05.752 05:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1579948
00:06:05.752 05:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1579948
00:06:05.752 05:27:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:06.320 lslocks: write error
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1579948
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1579948 ']'
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1579948
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1579948
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1579948'
00:06:06.320 killing process with pid 1579948
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1579948
00:06:06.320 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1579948
00:06:06.580
00:06:06.580 real 0m1.802s
00:06:06.580 user 0m1.926s
00:06:06.580 sys 0m0.607s
00:06:06.580 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:06.580 05:27:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.580 ************************************
00:06:06.580 END TEST locking_app_on_locked_coremask
00:06:06.580 ************************************
00:06:06.580 05:27:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:06.580 05:27:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:06.580 05:27:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:06.580 05:27:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:06.580 ************************************
00:06:06.580 START TEST locking_overlapped_coremask
00:06:06.580 ************************************
00:06:06.580 05:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:06.580 05:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1580218
00:06:06.580 05:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1580218 /var/tmp/spdk.sock
00:06:06.580 05:27:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:06.580 05:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1580218 ']'
00:06:06.580 05:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:06.580 05:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:06.580 05:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:06.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:06.580 05:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:06.580 05:27:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:06.580 [2024-11-27 05:27:54.533087] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:06:06.580 [2024-11-27 05:27:54.533132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580218 ]
00:06:06.839 [2024-11-27 05:27:54.608247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:06.839 [2024-11-27 05:27:54.653703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:06.839 [2024-11-27 05:27:54.653734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.839 [2024-11-27 05:27:54.653735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1580444
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1580444 /var/tmp/spdk2.sock
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1580444 /var/tmp/spdk2.sock
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1580444 /var/tmp/spdk2.sock
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1580444 ']'
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:07.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:07.407 05:27:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:07.666 [2024-11-27 05:27:55.416370] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:06:07.666 [2024-11-27 05:27:55.416418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580444 ]
00:06:07.666 [2024-11-27 05:27:55.508646] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1580218 has claimed it.
00:06:07.666 [2024-11-27 05:27:55.508688] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:08.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1580444) - No such process
00:06:08.235 ERROR: process (pid: 1580444) is no longer running
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1580218
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1580218 ']'
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1580218
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1580218
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1580218'
00:06:08.235 killing process with pid 1580218
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1580218
00:06:08.235 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1580218
00:06:08.495
00:06:08.495 real 0m1.937s
00:06:08.495 user 0m5.562s
00:06:08.495 sys 0m0.437s
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:08.495 ************************************
00:06:08.495 END TEST locking_overlapped_coremask
00:06:08.495 ************************************
00:06:08.495 05:27:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:08.495 05:27:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:08.495 05:27:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.495 05:27:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:08.495 ************************************
00:06:08.495 START TEST locking_overlapped_coremask_via_rpc
00:06:08.495 ************************************
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1580701
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1580701 /var/tmp/spdk.sock
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1580701 ']'
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:08.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:08.495 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:08.754 [2024-11-27 05:27:56.536601] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:06:08.754 [2024-11-27 05:27:56.536644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580701 ]
00:06:08.754 [2024-11-27 05:27:56.611025] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:08.754 [2024-11-27 05:27:56.611050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:08.754 [2024-11-27 05:27:56.655388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:09.013 [2024-11-27 05:27:56.655493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.013 [2024-11-27 05:27:56.655493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1580711
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1580711 /var/tmp/spdk2.sock
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1580711 ']'
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:09.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:09.013 05:27:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:09.013 [2024-11-27 05:27:56.914974] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:06:09.013 [2024-11-27 05:27:56.915018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580711 ]
00:06:09.013 [2024-11-27 05:27:57.005301] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:09.013 [2024-11-27 05:27:57.005326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.272 [2024-11-27 05:27:57.088414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.272 [2024-11-27 05:27:57.091718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.272 [2024-11-27 05:27:57.091719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.841 05:27:57 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.841 [2024-11-27 05:27:57.772741] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1580701 has claimed it. 00:06:09.841 request: 00:06:09.841 { 00:06:09.841 "method": "framework_enable_cpumask_locks", 00:06:09.841 "req_id": 1 00:06:09.841 } 00:06:09.841 Got JSON-RPC error response 00:06:09.841 response: 00:06:09.841 { 00:06:09.841 "code": -32603, 00:06:09.841 "message": "Failed to claim CPU core: 2" 00:06:09.841 } 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1580701 /var/tmp/spdk.sock 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1580701 ']' 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.841 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.842 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.842 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.101 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.101 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.101 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1580711 /var/tmp/spdk2.sock 00:06:10.101 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1580711 ']' 00:06:10.101 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.101 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.101 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:10.101 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.101 05:27:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.361 05:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.361 05:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.361 05:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:10.361 05:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:10.361 05:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:10.361 05:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:10.361 00:06:10.361 real 0m1.691s 00:06:10.361 user 0m0.817s 00:06:10.361 sys 0m0.129s 00:06:10.361 05:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.361 05:27:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.361 ************************************ 00:06:10.361 END TEST locking_overlapped_coremask_via_rpc 00:06:10.361 ************************************ 00:06:10.361 05:27:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:10.361 05:27:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1580701 ]] 00:06:10.361 05:27:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1580701 00:06:10.361 05:27:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1580701 ']' 00:06:10.361 05:27:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1580701 00:06:10.361 05:27:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:10.361 05:27:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.361 05:27:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1580701 00:06:10.361 05:27:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.361 05:27:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.361 05:27:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1580701' 00:06:10.361 killing process with pid 1580701 00:06:10.361 05:27:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1580701 00:06:10.361 05:27:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1580701 00:06:10.621 05:27:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1580711 ]] 00:06:10.621 05:27:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1580711 00:06:10.621 05:27:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1580711 ']' 00:06:10.621 05:27:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1580711 00:06:10.621 05:27:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:10.621 05:27:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.621 05:27:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1580711 00:06:10.880 05:27:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:10.880 05:27:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:10.880 05:27:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1580711' 00:06:10.880 killing process with pid 1580711 00:06:10.880 05:27:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1580711 00:06:10.880 05:27:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1580711 00:06:11.140 05:27:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.140 05:27:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:11.140 05:27:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1580701 ]] 00:06:11.140 05:27:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1580701 00:06:11.140 05:27:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1580701 ']' 00:06:11.140 05:27:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1580701 00:06:11.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1580701) - No such process 00:06:11.140 05:27:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1580701 is not found' 00:06:11.140 Process with pid 1580701 is not found 00:06:11.140 05:27:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1580711 ]] 00:06:11.140 05:27:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1580711 00:06:11.141 05:27:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1580711 ']' 00:06:11.141 05:27:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1580711 00:06:11.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1580711) - No such process 00:06:11.141 05:27:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1580711 is not found' 00:06:11.141 Process with pid 1580711 is not found 00:06:11.141 05:27:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:11.141 00:06:11.141 real 0m14.877s 00:06:11.141 user 0m26.338s 00:06:11.141 sys 0m4.950s 00:06:11.141 05:27:58 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.141 
05:27:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.141 ************************************ 00:06:11.141 END TEST cpu_locks 00:06:11.141 ************************************ 00:06:11.141 00:06:11.141 real 0m39.135s 00:06:11.141 user 1m14.432s 00:06:11.141 sys 0m8.493s 00:06:11.141 05:27:58 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.141 05:27:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.141 ************************************ 00:06:11.141 END TEST event 00:06:11.141 ************************************ 00:06:11.141 05:27:59 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:11.141 05:27:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.141 05:27:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.141 05:27:59 -- common/autotest_common.sh@10 -- # set +x 00:06:11.141 ************************************ 00:06:11.141 START TEST thread 00:06:11.141 ************************************ 00:06:11.141 05:27:59 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:11.141 * Looking for test storage... 
00:06:11.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:11.141 05:27:59 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.141 05:27:59 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.141 05:27:59 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.406 05:27:59 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.406 05:27:59 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.406 05:27:59 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.406 05:27:59 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.406 05:27:59 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.406 05:27:59 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.406 05:27:59 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.406 05:27:59 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.406 05:27:59 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.406 05:27:59 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.406 05:27:59 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.406 05:27:59 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.406 05:27:59 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:11.406 05:27:59 thread -- scripts/common.sh@345 -- # : 1 00:06:11.406 05:27:59 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.406 05:27:59 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.406 05:27:59 thread -- scripts/common.sh@365 -- # decimal 1 00:06:11.406 05:27:59 thread -- scripts/common.sh@353 -- # local d=1 00:06:11.406 05:27:59 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.406 05:27:59 thread -- scripts/common.sh@355 -- # echo 1 00:06:11.406 05:27:59 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.406 05:27:59 thread -- scripts/common.sh@366 -- # decimal 2 00:06:11.407 05:27:59 thread -- scripts/common.sh@353 -- # local d=2 00:06:11.407 05:27:59 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.407 05:27:59 thread -- scripts/common.sh@355 -- # echo 2 00:06:11.407 05:27:59 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.407 05:27:59 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.407 05:27:59 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.407 05:27:59 thread -- scripts/common.sh@368 -- # return 0 00:06:11.407 05:27:59 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.407 05:27:59 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.407 --rc genhtml_branch_coverage=1 00:06:11.407 --rc genhtml_function_coverage=1 00:06:11.407 --rc genhtml_legend=1 00:06:11.407 --rc geninfo_all_blocks=1 00:06:11.407 --rc geninfo_unexecuted_blocks=1 00:06:11.407 00:06:11.407 ' 00:06:11.407 05:27:59 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.407 --rc genhtml_branch_coverage=1 00:06:11.407 --rc genhtml_function_coverage=1 00:06:11.407 --rc genhtml_legend=1 00:06:11.407 --rc geninfo_all_blocks=1 00:06:11.407 --rc geninfo_unexecuted_blocks=1 00:06:11.407 00:06:11.407 ' 00:06:11.407 05:27:59 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.407 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.407 --rc genhtml_branch_coverage=1 00:06:11.408 --rc genhtml_function_coverage=1 00:06:11.408 --rc genhtml_legend=1 00:06:11.408 --rc geninfo_all_blocks=1 00:06:11.408 --rc geninfo_unexecuted_blocks=1 00:06:11.408 00:06:11.408 ' 00:06:11.408 05:27:59 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.408 --rc genhtml_branch_coverage=1 00:06:11.408 --rc genhtml_function_coverage=1 00:06:11.408 --rc genhtml_legend=1 00:06:11.408 --rc geninfo_all_blocks=1 00:06:11.408 --rc geninfo_unexecuted_blocks=1 00:06:11.408 00:06:11.408 ' 00:06:11.408 05:27:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.408 05:27:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:11.408 05:27:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.408 05:27:59 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.408 ************************************ 00:06:11.408 START TEST thread_poller_perf 00:06:11.408 ************************************ 00:06:11.408 05:27:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:11.408 [2024-11-27 05:27:59.280705] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:06:11.408 [2024-11-27 05:27:59.280774] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581270 ] 00:06:11.408 [2024-11-27 05:27:59.360254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.408 [2024-11-27 05:27:59.399520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.408 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:12.817 [2024-11-27T04:28:00.821Z] ====================================== 00:06:12.817 [2024-11-27T04:28:00.821Z] busy:2105121138 (cyc) 00:06:12.817 [2024-11-27T04:28:00.821Z] total_run_count: 416000 00:06:12.817 [2024-11-27T04:28:00.821Z] tsc_hz: 2100000000 (cyc) 00:06:12.817 [2024-11-27T04:28:00.821Z] ====================================== 00:06:12.817 [2024-11-27T04:28:00.821Z] poller_cost: 5060 (cyc), 2409 (nsec) 00:06:12.817 00:06:12.817 real 0m1.188s 00:06:12.817 user 0m1.105s 00:06:12.817 sys 0m0.079s 00:06:12.817 05:28:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.817 05:28:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.817 ************************************ 00:06:12.817 END TEST thread_poller_perf 00:06:12.817 ************************************ 00:06:12.817 05:28:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.817 05:28:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:12.817 05:28:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.817 05:28:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.817 ************************************ 00:06:12.817 START TEST thread_poller_perf 00:06:12.817 
************************************ 00:06:12.817 05:28:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:12.817 [2024-11-27 05:28:00.538197] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:06:12.817 [2024-11-27 05:28:00.538266] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581496 ] 00:06:12.817 [2024-11-27 05:28:00.617196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.817 [2024-11-27 05:28:00.657357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.817 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:13.754 [2024-11-27T04:28:01.758Z] ====================================== 00:06:13.754 [2024-11-27T04:28:01.758Z] busy:2101484792 (cyc) 00:06:13.754 [2024-11-27T04:28:01.758Z] total_run_count: 5595000 00:06:13.754 [2024-11-27T04:28:01.758Z] tsc_hz: 2100000000 (cyc) 00:06:13.754 [2024-11-27T04:28:01.758Z] ====================================== 00:06:13.754 [2024-11-27T04:28:01.758Z] poller_cost: 375 (cyc), 178 (nsec) 00:06:13.754 00:06:13.754 real 0m1.180s 00:06:13.754 user 0m1.104s 00:06:13.754 sys 0m0.072s 00:06:13.754 05:28:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.754 05:28:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.754 ************************************ 00:06:13.754 END TEST thread_poller_perf 00:06:13.754 ************************************ 00:06:13.754 05:28:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:13.754 00:06:13.754 real 0m2.687s 00:06:13.754 user 0m2.371s 00:06:13.754 sys 0m0.331s 00:06:13.754 05:28:01 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.754 05:28:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.754 ************************************ 00:06:13.754 END TEST thread 00:06:13.754 ************************************ 00:06:14.013 05:28:01 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:14.013 05:28:01 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.013 05:28:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.013 05:28:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.013 05:28:01 -- common/autotest_common.sh@10 -- # set +x 00:06:14.013 ************************************ 00:06:14.013 START TEST app_cmdline 00:06:14.013 ************************************ 00:06:14.013 05:28:01 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.013 * Looking for test storage... 00:06:14.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:14.013 05:28:01 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.013 05:28:01 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.013 05:28:01 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.013 05:28:01 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.013 05:28:01 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:14.013 05:28:01 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.013 05:28:01 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.013 --rc genhtml_branch_coverage=1 
00:06:14.013 --rc genhtml_function_coverage=1 00:06:14.013 --rc genhtml_legend=1 00:06:14.013 --rc geninfo_all_blocks=1 00:06:14.013 --rc geninfo_unexecuted_blocks=1 00:06:14.013 00:06:14.013 ' 00:06:14.013 05:28:01 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.013 --rc genhtml_branch_coverage=1 00:06:14.013 --rc genhtml_function_coverage=1 00:06:14.013 --rc genhtml_legend=1 00:06:14.013 --rc geninfo_all_blocks=1 00:06:14.013 --rc geninfo_unexecuted_blocks=1 00:06:14.013 00:06:14.013 ' 00:06:14.013 05:28:01 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.013 --rc genhtml_branch_coverage=1 00:06:14.013 --rc genhtml_function_coverage=1 00:06:14.013 --rc genhtml_legend=1 00:06:14.013 --rc geninfo_all_blocks=1 00:06:14.013 --rc geninfo_unexecuted_blocks=1 00:06:14.014 00:06:14.014 ' 00:06:14.014 05:28:01 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.014 --rc genhtml_branch_coverage=1 00:06:14.014 --rc genhtml_function_coverage=1 00:06:14.014 --rc genhtml_legend=1 00:06:14.014 --rc geninfo_all_blocks=1 00:06:14.014 --rc geninfo_unexecuted_blocks=1 00:06:14.014 00:06:14.014 ' 00:06:14.014 05:28:01 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:14.014 05:28:01 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1581826 00:06:14.014 05:28:01 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1581826 00:06:14.014 05:28:01 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:14.014 05:28:01 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1581826 ']' 00:06:14.014 05:28:01 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:14.014 05:28:01 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.014 05:28:01 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.014 05:28:01 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.014 05:28:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.273 [2024-11-27 05:28:02.028594] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:06:14.273 [2024-11-27 05:28:02.028643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581826 ] 00:06:14.273 [2024-11-27 05:28:02.102900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.273 [2024-11-27 05:28:02.142543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.532 05:28:02 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.532 05:28:02 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:14.532 05:28:02 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:14.791 { 00:06:14.791 "version": "SPDK v25.01-pre git sha1 a640d9f98", 00:06:14.791 "fields": { 00:06:14.791 "major": 25, 00:06:14.791 "minor": 1, 00:06:14.791 "patch": 0, 00:06:14.791 "suffix": "-pre", 00:06:14.791 "commit": "a640d9f98" 00:06:14.791 } 00:06:14.791 } 00:06:14.791 05:28:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:14.791 05:28:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:14.791 05:28:02 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:06:14.791 05:28:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:14.791 05:28:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:14.791 05:28:02 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.791 05:28:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.792 05:28:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:14.792 05:28:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.792 05:28:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:14.792 05:28:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:14.792 05:28:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.792 request: 00:06:14.792 { 00:06:14.792 "method": "env_dpdk_get_mem_stats", 00:06:14.792 "req_id": 1 00:06:14.792 } 00:06:14.792 Got JSON-RPC error response 00:06:14.792 response: 00:06:14.792 { 00:06:14.792 "code": -32601, 00:06:14.792 "message": "Method not found" 00:06:14.792 } 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.792 05:28:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1581826 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1581826 ']' 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1581826 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.792 05:28:02 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1581826 00:06:15.051 05:28:02 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.051 05:28:02 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.051 05:28:02 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1581826' 00:06:15.051 killing process with pid 1581826 00:06:15.051 
05:28:02 app_cmdline -- common/autotest_common.sh@973 -- # kill 1581826 00:06:15.051 05:28:02 app_cmdline -- common/autotest_common.sh@978 -- # wait 1581826 00:06:15.310 00:06:15.310 real 0m1.325s 00:06:15.310 user 0m1.528s 00:06:15.310 sys 0m0.445s 00:06:15.310 05:28:03 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.310 05:28:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:15.310 ************************************ 00:06:15.310 END TEST app_cmdline 00:06:15.310 ************************************ 00:06:15.310 05:28:03 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:15.310 05:28:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.310 05:28:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.310 05:28:03 -- common/autotest_common.sh@10 -- # set +x 00:06:15.310 ************************************ 00:06:15.310 START TEST version 00:06:15.310 ************************************ 00:06:15.310 05:28:03 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:15.310 * Looking for test storage... 
00:06:15.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:15.310 05:28:03 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.310 05:28:03 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.310 05:28:03 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.570 05:28:03 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.570 05:28:03 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.570 05:28:03 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.570 05:28:03 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.571 05:28:03 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.571 05:28:03 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.571 05:28:03 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.571 05:28:03 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.571 05:28:03 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.571 05:28:03 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.571 05:28:03 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.571 05:28:03 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.571 05:28:03 version -- scripts/common.sh@344 -- # case "$op" in 00:06:15.571 05:28:03 version -- scripts/common.sh@345 -- # : 1 00:06:15.571 05:28:03 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.571 05:28:03 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.571 05:28:03 version -- scripts/common.sh@365 -- # decimal 1 00:06:15.571 05:28:03 version -- scripts/common.sh@353 -- # local d=1 00:06:15.571 05:28:03 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.571 05:28:03 version -- scripts/common.sh@355 -- # echo 1 00:06:15.571 05:28:03 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.571 05:28:03 version -- scripts/common.sh@366 -- # decimal 2 00:06:15.571 05:28:03 version -- scripts/common.sh@353 -- # local d=2 00:06:15.571 05:28:03 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.571 05:28:03 version -- scripts/common.sh@355 -- # echo 2 00:06:15.571 05:28:03 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.571 05:28:03 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.571 05:28:03 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.571 05:28:03 version -- scripts/common.sh@368 -- # return 0 00:06:15.571 05:28:03 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.571 05:28:03 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.571 --rc genhtml_branch_coverage=1 00:06:15.571 --rc genhtml_function_coverage=1 00:06:15.571 --rc genhtml_legend=1 00:06:15.571 --rc geninfo_all_blocks=1 00:06:15.571 --rc geninfo_unexecuted_blocks=1 00:06:15.571 00:06:15.571 ' 00:06:15.571 05:28:03 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.571 --rc genhtml_branch_coverage=1 00:06:15.571 --rc genhtml_function_coverage=1 00:06:15.571 --rc genhtml_legend=1 00:06:15.571 --rc geninfo_all_blocks=1 00:06:15.571 --rc geninfo_unexecuted_blocks=1 00:06:15.571 00:06:15.571 ' 00:06:15.571 05:28:03 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.571 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.571 --rc genhtml_branch_coverage=1 00:06:15.571 --rc genhtml_function_coverage=1 00:06:15.571 --rc genhtml_legend=1 00:06:15.571 --rc geninfo_all_blocks=1 00:06:15.571 --rc geninfo_unexecuted_blocks=1 00:06:15.571 00:06:15.571 ' 00:06:15.571 05:28:03 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.571 --rc genhtml_branch_coverage=1 00:06:15.571 --rc genhtml_function_coverage=1 00:06:15.571 --rc genhtml_legend=1 00:06:15.571 --rc geninfo_all_blocks=1 00:06:15.571 --rc geninfo_unexecuted_blocks=1 00:06:15.571 00:06:15.571 ' 00:06:15.571 05:28:03 version -- app/version.sh@17 -- # get_header_version major 00:06:15.571 05:28:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.571 05:28:03 version -- app/version.sh@14 -- # cut -f2 00:06:15.571 05:28:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.571 05:28:03 version -- app/version.sh@17 -- # major=25 00:06:15.571 05:28:03 version -- app/version.sh@18 -- # get_header_version minor 00:06:15.571 05:28:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.571 05:28:03 version -- app/version.sh@14 -- # cut -f2 00:06:15.571 05:28:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.571 05:28:03 version -- app/version.sh@18 -- # minor=1 00:06:15.571 05:28:03 version -- app/version.sh@19 -- # get_header_version patch 00:06:15.571 05:28:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.571 05:28:03 version -- app/version.sh@14 -- # cut -f2 00:06:15.571 05:28:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.571 
05:28:03 version -- app/version.sh@19 -- # patch=0 00:06:15.571 05:28:03 version -- app/version.sh@20 -- # get_header_version suffix 00:06:15.571 05:28:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.571 05:28:03 version -- app/version.sh@14 -- # cut -f2 00:06:15.571 05:28:03 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.571 05:28:03 version -- app/version.sh@20 -- # suffix=-pre 00:06:15.571 05:28:03 version -- app/version.sh@22 -- # version=25.1 00:06:15.571 05:28:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:15.571 05:28:03 version -- app/version.sh@28 -- # version=25.1rc0 00:06:15.571 05:28:03 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:15.571 05:28:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:15.571 05:28:03 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:15.571 05:28:03 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:15.571 00:06:15.571 real 0m0.240s 00:06:15.571 user 0m0.152s 00:06:15.571 sys 0m0.132s 00:06:15.571 05:28:03 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.571 05:28:03 version -- common/autotest_common.sh@10 -- # set +x 00:06:15.571 ************************************ 00:06:15.571 END TEST version 00:06:15.571 ************************************ 00:06:15.571 05:28:03 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:15.571 05:28:03 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:15.571 05:28:03 -- spdk/autotest.sh@194 -- # uname -s 00:06:15.571 05:28:03 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:15.571 05:28:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:15.571 05:28:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:15.571 05:28:03 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:15.571 05:28:03 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:15.571 05:28:03 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:15.571 05:28:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.571 05:28:03 -- common/autotest_common.sh@10 -- # set +x 00:06:15.571 05:28:03 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:15.571 05:28:03 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:15.571 05:28:03 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:15.571 05:28:03 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:15.571 05:28:03 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:15.571 05:28:03 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:15.571 05:28:03 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.571 05:28:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.571 05:28:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.571 05:28:03 -- common/autotest_common.sh@10 -- # set +x 00:06:15.571 ************************************ 00:06:15.571 START TEST nvmf_tcp 00:06:15.571 ************************************ 00:06:15.571 05:28:03 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.831 * Looking for test storage... 
00:06:15.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:15.831 05:28:03 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.831 05:28:03 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.831 05:28:03 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.831 05:28:03 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.831 05:28:03 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:15.831 05:28:03 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.831 05:28:03 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.831 --rc genhtml_branch_coverage=1 00:06:15.831 --rc genhtml_function_coverage=1 00:06:15.831 --rc genhtml_legend=1 00:06:15.831 --rc geninfo_all_blocks=1 00:06:15.831 --rc geninfo_unexecuted_blocks=1 00:06:15.831 00:06:15.831 ' 00:06:15.831 05:28:03 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.831 --rc genhtml_branch_coverage=1 00:06:15.831 --rc genhtml_function_coverage=1 00:06:15.831 --rc genhtml_legend=1 00:06:15.831 --rc geninfo_all_blocks=1 00:06:15.831 --rc geninfo_unexecuted_blocks=1 00:06:15.832 00:06:15.832 ' 00:06:15.832 05:28:03 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:15.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.832 --rc genhtml_branch_coverage=1 00:06:15.832 --rc genhtml_function_coverage=1 00:06:15.832 --rc genhtml_legend=1 00:06:15.832 --rc geninfo_all_blocks=1 00:06:15.832 --rc geninfo_unexecuted_blocks=1 00:06:15.832 00:06:15.832 ' 00:06:15.832 05:28:03 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.832 --rc genhtml_branch_coverage=1 00:06:15.832 --rc genhtml_function_coverage=1 00:06:15.832 --rc genhtml_legend=1 00:06:15.832 --rc geninfo_all_blocks=1 00:06:15.832 --rc geninfo_unexecuted_blocks=1 00:06:15.832 00:06:15.832 ' 00:06:15.832 05:28:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:15.832 05:28:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:15.832 05:28:03 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:15.832 05:28:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.832 05:28:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.832 05:28:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.832 ************************************ 00:06:15.832 START TEST nvmf_target_core 00:06:15.832 ************************************ 00:06:15.832 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:16.091 * Looking for test storage... 
00:06:16.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.091 --rc genhtml_branch_coverage=1 00:06:16.091 --rc genhtml_function_coverage=1 00:06:16.091 --rc genhtml_legend=1 00:06:16.091 --rc geninfo_all_blocks=1 00:06:16.091 --rc geninfo_unexecuted_blocks=1 00:06:16.091 00:06:16.091 ' 00:06:16.091 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.091 --rc genhtml_branch_coverage=1 
00:06:16.091 --rc genhtml_function_coverage=1 00:06:16.091 --rc genhtml_legend=1 00:06:16.091 --rc geninfo_all_blocks=1 00:06:16.092 --rc geninfo_unexecuted_blocks=1 00:06:16.092 00:06:16.092 ' 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.092 --rc genhtml_branch_coverage=1 00:06:16.092 --rc genhtml_function_coverage=1 00:06:16.092 --rc genhtml_legend=1 00:06:16.092 --rc geninfo_all_blocks=1 00:06:16.092 --rc geninfo_unexecuted_blocks=1 00:06:16.092 00:06:16.092 ' 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.092 --rc genhtml_branch_coverage=1 00:06:16.092 --rc genhtml_function_coverage=1 00:06:16.092 --rc genhtml_legend=1 00:06:16.092 --rc geninfo_all_blocks=1 00:06:16.092 --rc geninfo_unexecuted_blocks=1 00:06:16.092 00:06:16.092 ' 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.092 05:28:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:16.092 ************************************ 00:06:16.092 START TEST nvmf_abort 00:06:16.092 ************************************ 00:06:16.092 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:16.092 * Looking for test storage... 
00:06:16.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.352 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.353 
05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.353 --rc genhtml_branch_coverage=1 00:06:16.353 --rc genhtml_function_coverage=1 00:06:16.353 --rc genhtml_legend=1 00:06:16.353 --rc geninfo_all_blocks=1 00:06:16.353 --rc 
geninfo_unexecuted_blocks=1 00:06:16.353 00:06:16.353 ' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.353 --rc genhtml_branch_coverage=1 00:06:16.353 --rc genhtml_function_coverage=1 00:06:16.353 --rc genhtml_legend=1 00:06:16.353 --rc geninfo_all_blocks=1 00:06:16.353 --rc geninfo_unexecuted_blocks=1 00:06:16.353 00:06:16.353 ' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.353 --rc genhtml_branch_coverage=1 00:06:16.353 --rc genhtml_function_coverage=1 00:06:16.353 --rc genhtml_legend=1 00:06:16.353 --rc geninfo_all_blocks=1 00:06:16.353 --rc geninfo_unexecuted_blocks=1 00:06:16.353 00:06:16.353 ' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.353 --rc genhtml_branch_coverage=1 00:06:16.353 --rc genhtml_function_coverage=1 00:06:16.353 --rc genhtml_legend=1 00:06:16.353 --rc geninfo_all_blocks=1 00:06:16.353 --rc geninfo_unexecuted_blocks=1 00:06:16.353 00:06:16.353 ' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
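The `cmp_versions 1.15 '<' 2` trace above is a dotted-version comparison done field by field. A minimal standalone sketch of the same idea (hypothetical helper name `ver_lt`, not SPDK's actual `scripts/common.sh` implementation):

```shell
#!/usr/bin/env bash
# Dotted-version "less than" check, in the spirit of the cmp_versions
# trace above: split on dots, compare numerically, pad missing fields with 0.
ver_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
  done
  return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lt"        # prints "lt": 1.15 < 2, as in the trace
ver_lt 2 1.15 || echo "not-lt"    # prints "not-lt"
```

Note the first field decides here (1 < 2), which is why the trace returns 0 without comparing the second field.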
00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.353 05:28:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:16.353 05:28:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:22.927 05:28:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:22.927 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:22.927 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:22.927 05:28:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:22.927 Found net devices under 0000:86:00.0: cvl_0_0 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:06:22.927 Found net devices under 0000:86:00.1: cvl_0_1 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.927 05:28:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.927 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.927 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.927 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:22.927 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.927 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.927 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.927 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:22.927 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:22.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:22.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:06:22.927 00:06:22.927 --- 10.0.0.2 ping statistics --- 00:06:22.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.927 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:06:22.927 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:06:22.927 00:06:22.927 --- 10.0.0.1 ping statistics --- 00:06:22.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.927 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:06:22.927 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1585477 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1585477 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1585477 ']' 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.928 05:28:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.928 [2024-11-27 05:28:10.315475] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:06:22.928 [2024-11-27 05:28:10.315518] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.928 [2024-11-27 05:28:10.395610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.928 [2024-11-27 05:28:10.437494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.928 [2024-11-27 05:28:10.437533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.928 [2024-11-27 05:28:10.437540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.928 [2024-11-27 05:28:10.437546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.928 [2024-11-27 05:28:10.437551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
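The `nvmf_tgt ... -m 0xE` invocation above selects CPU cores via a bit mask, which is why reactors start on cores 1, 2, and 3 in the log (0xE = binary 1110). Decoding such a mask is a simple bit walk; a standalone sketch (hypothetical helper, not SPDK code):

```shell
# Decode an SPDK-style CPU core mask (e.g. 0xE) into core numbers:
# walk the mask bit by bit, emitting the index of each set bit.
mask_to_cores() {
  local mask=$(( $1 )) core=0 out=()
  while (( mask )); do
    if (( mask & 1 )); then out+=("$core"); fi
    core=$(( core + 1 ))
    mask=$(( mask >> 1 ))
  done
  echo "${out[@]}"
}
mask_to_cores 0xE   # prints: 1 2 3
```

This matches the three "Reactor started on core N" notices that follow.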
00:06:22.928 [2024-11-27 05:28:10.439023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.928 [2024-11-27 05:28:10.439109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.928 [2024-11-27 05:28:10.439109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.185 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.185 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:23.185 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:23.185 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.185 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.185 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:23.185 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:23.185 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.185 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.185 [2024-11-27 05:28:11.186326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.443 Malloc0 00:06:23.443 05:28:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.443 Delay0 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.443 [2024-11-27 05:28:11.263796] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.443 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.444 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.444 05:28:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:23.444 [2024-11-27 05:28:11.440813] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:25.977 Initializing NVMe Controllers 00:06:25.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:25.977 controller IO queue size 128 less than required 00:06:25.977 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:25.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:25.977 Initialization complete. Launching workers. 
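The abort counters printed below by the example are internally consistent: every submitted abort resolves as success, unsuccessful, or failed. A quick arithmetic check, with the counts copied from this run:

```shell
# Counters from this run's abort summary (CTRLR and summary lines).
abort_submitted=36877; submit_failed=62
success=36820; unsuccessful=57; failed=0
# success + unsuccessful + failed should equal the aborts submitted:
echo $(( success + unsuccessful + failed ))   # prints 36877
```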
00:06:25.977 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36816 00:06:25.977 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36877, failed to submit 62 00:06:25.977 success 36820, unsuccessful 57, failed 0 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:25.978 rmmod nvme_tcp 00:06:25.978 rmmod nvme_fabrics 00:06:25.978 rmmod nvme_keyring 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:25.978 05:28:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1585477 ']' 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1585477 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1585477 ']' 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1585477 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1585477 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1585477' 00:06:25.978 killing process with pid 1585477 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1585477 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1585477 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.978 05:28:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.516 05:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:28.516 00:06:28.516 real 0m11.894s 00:06:28.516 user 0m13.745s 00:06:28.516 sys 0m5.502s 00:06:28.516 05:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.516 05:28:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.516 ************************************ 00:06:28.516 END TEST nvmf_abort 00:06:28.516 ************************************ 00:06:28.516 05:28:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:28.516 05:28:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.516 05:28:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.516 05:28:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:28.516 ************************************ 00:06:28.516 START TEST nvmf_ns_hotplug_stress 00:06:28.516 ************************************ 00:06:28.516 05:28:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:28.516 * Looking for test storage... 00:06:28.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.516 
05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.516 05:28:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.516 --rc genhtml_branch_coverage=1 00:06:28.516 --rc genhtml_function_coverage=1 00:06:28.516 --rc genhtml_legend=1 00:06:28.516 --rc geninfo_all_blocks=1 00:06:28.516 --rc geninfo_unexecuted_blocks=1 00:06:28.516 00:06:28.516 ' 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.516 --rc genhtml_branch_coverage=1 00:06:28.516 --rc genhtml_function_coverage=1 00:06:28.516 --rc genhtml_legend=1 00:06:28.516 --rc geninfo_all_blocks=1 00:06:28.516 --rc geninfo_unexecuted_blocks=1 00:06:28.516 00:06:28.516 ' 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.516 --rc genhtml_branch_coverage=1 00:06:28.516 --rc genhtml_function_coverage=1 00:06:28.516 --rc genhtml_legend=1 00:06:28.516 --rc geninfo_all_blocks=1 00:06:28.516 --rc geninfo_unexecuted_blocks=1 00:06:28.516 00:06:28.516 ' 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.516 --rc genhtml_branch_coverage=1 00:06:28.516 --rc genhtml_function_coverage=1 00:06:28.516 --rc genhtml_legend=1 00:06:28.516 --rc geninfo_all_blocks=1 00:06:28.516 --rc geninfo_unexecuted_blocks=1 00:06:28.516 
00:06:28.516 ' 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.516 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:28.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:28.517 05:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:35.100 05:28:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:35.100 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:35.100 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:35.100 05:28:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:35.100 Found net devices under 0000:86:00.0: cvl_0_0 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.100 05:28:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:35.100 Found net devices under 0000:86:00.1: cvl_0_1 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.100 05:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.100 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.100 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.100 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:35.100 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.100 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.100 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.100 05:28:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:35.100 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:35.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:06:35.100 00:06:35.101 --- 10.0.0.2 ping statistics --- 00:06:35.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.101 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:06:35.101 00:06:35.101 --- 10.0.0.1 ping statistics --- 00:06:35.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.101 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1589544 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1589544 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1589544 ']' 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.101 [2024-11-27 05:28:22.301379] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:06:35.101 [2024-11-27 05:28:22.301420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.101 [2024-11-27 05:28:22.377187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.101 [2024-11-27 05:28:22.415941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.101 [2024-11-27 05:28:22.415977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.101 [2024-11-27 05:28:22.415984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.101 [2024-11-27 05:28:22.415990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.101 [2024-11-27 05:28:22.415994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:35.101 [2024-11-27 05:28:22.417387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.101 [2024-11-27 05:28:22.417471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.101 [2024-11-27 05:28:22.417472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:35.101 [2024-11-27 05:28:22.743725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:35.101 05:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:35.378 [2024-11-27 05:28:23.145166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.378 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:35.378 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:35.638 Malloc0 00:06:35.638 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:35.897 Delay0 00:06:35.897 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.156 05:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:36.156 NULL1 00:06:36.416 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:36.416 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:36.416 05:28:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1590026 00:06:36.416 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:36.416 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.675 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.935 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:36.935 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:36.935 true 00:06:37.196 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:37.196 05:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.196 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.455 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:37.455 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:37.715 true 00:06:37.715 05:28:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:37.715 05:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.655 Read completed with error (sct=0, sc=11) 00:06:38.915 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.915 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.915 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:38.915 05:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:39.175 true 00:06:39.175 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:39.175 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.434 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.694 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:39.694 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:39.694 true 00:06:39.694 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:39.694 05:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.075 05:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.075 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:41.075 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:41.333 true 00:06:41.333 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:41.333 05:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.270 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.270 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:42.270 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:42.529 true 00:06:42.529 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:42.529 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.789 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.048 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:43.048 05:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:43.048 true 00:06:43.308 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:43.308 05:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.248 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.507 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.507 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:44.507 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:44.767 true 00:06:44.767 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:44.767 05:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.707 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.707 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:45.707 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:45.966 true 00:06:45.966 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:45.966 05:28:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.226 05:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.226 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:46.226 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:46.486 true 00:06:46.486 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:46.486 05:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.685 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.685 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 
00:06:47.685 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:47.945 true 00:06:47.945 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:47.945 05:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.956 05:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.956 05:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:48.956 05:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:49.254 true 00:06:49.254 05:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:49.254 05:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.527 05:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.527 05:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:49.527 05:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:49.819 true 00:06:49.819 05:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:49.819 05:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.795 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.054 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:51.054 05:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:51.313 true 00:06:51.313 05:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:51.313 05:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.572 05:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.832 05:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:51.832 05:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:51.832 true 00:06:51.832 05:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:51.832 05:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.211 05:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.211 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:53.211 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:53.470 true 00:06:53.470 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:53.470 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.729 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.729 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:53.729 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:53.987 true 00:06:53.987 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:53.988 05:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.364 05:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.364 05:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:55.364 05:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:55.624 true 00:06:55.624 05:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:55.624 05:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.562 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.562 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:56.562 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:56.821 true 00:06:56.821 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:56.821 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.080 05:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.080 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:57.080 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:57.339 true 00:06:57.339 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:57.339 05:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.304 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.563 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:58.563 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:58.822 true 00:06:58.822 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:58.822 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.080 05:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.080 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:59.080 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:59.338 true 00:06:59.338 05:28:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:06:59.338 05:28:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.717 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.717 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:00.717 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:00.977 true 00:07:00.977 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:07:00.977 05:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.915 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.915 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 
00:07:01.915 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:02.179 true 00:07:02.179 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:07:02.179 05:28:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.179 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.440 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:02.440 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:02.698 true 00:07:02.698 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:07:02.698 05:28:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.633 05:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.893 05:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:03.893 05:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:04.152 true 00:07:04.152 05:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:07:04.152 05:28:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.410 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.410 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:04.410 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:04.669 true 00:07:04.669 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:07:04.669 05:28:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.044 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.044 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.044 05:28:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:06.044 05:28:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:06.044 true 00:07:06.302 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:07:06.302 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.302 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.559 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:06.559 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:06.819 Initializing NVMe Controllers 00:07:06.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:06.819 Controller IO queue size 128, less than required. 00:07:06.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:06.819 Controller IO queue size 128, less than required. 00:07:06.819 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:06.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:06.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:06.819 Initialization complete. 
Launching workers. 00:07:06.819 ======================================================== 00:07:06.819 Latency(us) 00:07:06.819 Device Information : IOPS MiB/s Average min max 00:07:06.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1576.37 0.77 49404.82 2169.44 1055348.01 00:07:06.819 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15866.58 7.75 8047.04 2326.39 445201.90 00:07:06.819 ======================================================== 00:07:06.819 Total : 17442.94 8.52 11784.66 2169.44 1055348.01 00:07:06.819 00:07:06.819 true 00:07:06.819 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1590026 00:07:06.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1590026) - No such process 00:07:06.819 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1590026 00:07:06.819 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.078 05:28:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.078 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:07.078 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:07.078 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:07.079 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.079 05:28:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:07.359 null0 00:07:07.359 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:07.359 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.359 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:07.618 null1 00:07:07.618 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:07.618 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.618 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:07.877 null2 00:07:07.877 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:07.877 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.877 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:07.877 null3 00:07:07.877 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:07.877 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:07.877 05:28:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:08.135 null4 00:07:08.135 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.135 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.135 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:08.394 null5 00:07:08.394 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.394 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.394 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:08.654 null6 00:07:08.654 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.654 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.654 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:08.654 null7 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:08.655 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1595416 1595417 1595419 1595421 1595423 1595425 1595427 1595429 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.915 05:28:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.174 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.433 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.433 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.433 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.433 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.433 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2
00:07:09.433 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:09.433 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:09.433 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:09.692 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.692 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.692 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:09.692 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.693 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.952 05:28:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:10.211 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.211 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:10.211 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:10.211 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:10.211 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:10.211 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:10.211 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:10.211 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.470 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:10.729 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:10.729 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.729 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:10.729 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:10.729 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:10.729 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:10.729 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:10.729 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.988 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:10.989 05:28:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.248 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.249 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:11.249 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.249 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.249 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:11.249 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.249 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.249 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:11.249 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.249 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.249 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:11.507 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:11.507 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:11.507 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:11.507 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:11.507 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.507 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:11.507 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:11.507 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:11.765 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:12.023 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:12.023 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:12.023 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:12.023 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:12.023 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:12.023 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:12.023 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.023 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.023 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:12.023 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.024 05:28:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:12.024 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.024 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.024 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.024 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:12.024 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.024 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:12.024 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.024 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.024 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:12.282 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:12.282 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:12.282 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:12.282 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:12.282 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:12.282 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:12.283 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:12.283 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.542 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.802 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:13.061 rmmod nvme_tcp 00:07:13.061 rmmod nvme_fabrics 00:07:13.061 rmmod nvme_keyring 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1589544 ']' 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1589544 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 1589544 ']' 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1589544 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1589544 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:13.061 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1589544' 00:07:13.062 killing process with pid 1589544 00:07:13.062 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1589544 00:07:13.062 05:29:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1589544 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.321 05:29:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.231 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:15.231 00:07:15.231 real 0m47.198s 00:07:15.231 user 3m11.441s 00:07:15.231 sys 0m15.476s 00:07:15.231 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.231 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:15.231 ************************************ 00:07:15.231 END TEST nvmf_ns_hotplug_stress 00:07:15.231 ************************************ 00:07:15.231 05:29:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:15.231 05:29:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.231 05:29:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.231 05:29:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.491 ************************************ 00:07:15.491 START TEST nvmf_delete_subsystem 00:07:15.491 ************************************ 00:07:15.491 
05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:15.491 * Looking for test storage... 00:07:15.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.491 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:15.491 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:15.491 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:15.491 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.492 05:29:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.492 05:29:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.492 --rc genhtml_branch_coverage=1 00:07:15.492 --rc genhtml_function_coverage=1 00:07:15.492 --rc genhtml_legend=1 00:07:15.492 --rc geninfo_all_blocks=1 00:07:15.492 --rc geninfo_unexecuted_blocks=1 00:07:15.492 00:07:15.492 ' 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.492 --rc genhtml_branch_coverage=1 00:07:15.492 --rc genhtml_function_coverage=1 00:07:15.492 --rc genhtml_legend=1 00:07:15.492 --rc geninfo_all_blocks=1 00:07:15.492 --rc geninfo_unexecuted_blocks=1 00:07:15.492 00:07:15.492 ' 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.492 --rc genhtml_branch_coverage=1 00:07:15.492 --rc genhtml_function_coverage=1 00:07:15.492 --rc genhtml_legend=1 00:07:15.492 --rc geninfo_all_blocks=1 00:07:15.492 --rc geninfo_unexecuted_blocks=1 00:07:15.492 00:07:15.492 ' 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.492 --rc genhtml_branch_coverage=1 00:07:15.492 --rc genhtml_function_coverage=1 00:07:15.492 --rc genhtml_legend=1 00:07:15.492 --rc geninfo_all_blocks=1 00:07:15.492 --rc geninfo_unexecuted_blocks=1 00:07:15.492 00:07:15.492 ' 
00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.492 05:29:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:15.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:15.492 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:15.493 05:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:22.069 05:29:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:22.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:22.069 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:22.069 Found net devices under 0000:86:00.0: cvl_0_0 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:86:00.1: cvl_0_1' 00:07:22.069 Found net devices under 0000:86:00.1: cvl_0_1 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:22.069 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:22.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:22.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:07:22.070 00:07:22.070 --- 10.0.0.2 ping statistics --- 00:07:22.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.070 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:07:22.070 00:07:22.070 --- 10.0.0.1 ping statistics --- 00:07:22.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.070 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:22.070 05:29:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1599969 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1599969 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1599969 ']' 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.070 [2024-11-27 05:29:09.524312] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:07:22.070 [2024-11-27 05:29:09.524358] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.070 [2024-11-27 05:29:09.602597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.070 [2024-11-27 05:29:09.643270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.070 [2024-11-27 05:29:09.643308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.070 [2024-11-27 05:29:09.643318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.070 [2024-11-27 05:29:09.643325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.070 [2024-11-27 05:29:09.643331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:22.070 [2024-11-27 05:29:09.644526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.070 [2024-11-27 05:29:09.644528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.070 [2024-11-27 05:29:09.781715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.070 [2024-11-27 05:29:09.801942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.070 NULL1 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.070 Delay0 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.070 05:29:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1600061 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:22.070 05:29:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:22.070 [2024-11-27 05:29:09.902866] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:23.975 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.975 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.975 05:29:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.234 Read completed with error (sct=0, sc=8) 00:07:24.234 Read completed with error (sct=0, sc=8) 00:07:24.234 Read completed with error (sct=0, sc=8) 00:07:24.234 Read completed with error (sct=0, sc=8) 00:07:24.234 starting I/O failed: -6 00:07:24.234 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 starting I/O failed: -6 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 starting I/O failed: -6 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 starting I/O failed: -6 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 starting I/O failed: -6 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 starting I/O failed: -6 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error 
(sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 starting I/O failed: -6 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 starting I/O failed: -6 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 starting I/O failed: -6 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 [2024-11-27 05:29:12.021851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0c4a0 is same with the state(6) to be set 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 
00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with 
error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Write completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 00:07:24.235 Read completed with error (sct=0, sc=8) 
00:07:24.235 Read completed with error (sct=0, sc=8)
00:07:24.235 Write completed with error (sct=0, sc=8)
00:07:24.235 starting I/O failed: -6
[... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided ...]
00:07:24.236 [2024-11-27 05:29:12.022823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc47000d350 is same with the state(6) to be set
00:07:25.191 [2024-11-27 05:29:12.997649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0d9b0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:07:25.191 [2024-11-27 05:29:13.022836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0c860 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:07:25.192 [2024-11-27 05:29:13.025544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc47000d020 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:07:25.192 [2024-11-27 05:29:13.025699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc470000c40 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:07:25.192 [2024-11-27 05:29:13.026381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc47000d680 is same with the state(6) to be set
00:07:25.192 Initializing NVMe Controllers
00:07:25.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:25.192 Controller IO queue size 128, less than required.
00:07:25.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:25.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:25.192 Initialization complete. Launching workers.
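The aborted completions above report NVMe status as (sct, sc) pairs: sct=0 is the generic command status type, and within it sc=8 is "Command Aborted due to SQ Deletion" in the NVMe base specification, which is the expected failure mode when a delete_subsystem test tears down qpairs under load (the accompanying "starting I/O failed: -6" is consistent with -ENXIO from a dead qpair). A small decoding sketch in shell; the function name `decode_nvme_status` and the abbreviated lookup table are illustrative, not part of SPDK:

```shell
# Map the (sct, sc) pairs seen in the log to human-readable NVMe status
# strings. Only a handful of generic (sct=0) codes are listed here; the full
# table lives in the NVMe base specification.
decode_nvme_status() {
    local sct=$1 sc=$2
    if [ "$sct" -ne 0 ]; then
        echo "sct=$sct: non-generic status type"
        return
    fi
    case "$sc" in
        0) echo "Successful Completion" ;;
        4) echo "Data Transfer Error" ;;
        6) echo "Internal Error" ;;
        8) echo "Command Aborted due to SQ Deletion" ;;
        *) echo "Generic status code $sc" ;;
    esac
}
```

For the errors in this run, `decode_nvme_status 0 8` prints "Command Aborted due to SQ Deletion".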
00:07:25.192 ========================================================
00:07:25.192 Latency(us)
00:07:25.192 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     155.04       0.08  876252.11     245.29 1009755.76
00:07:25.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     167.47       0.08 1081471.14     444.29 2002117.13
00:07:25.192 ========================================================
00:07:25.192 Total                                                                    :     322.51       0.16  982814.22     245.29 2002117.13
00:07:25.192
00:07:25.192 [2024-11-27 05:29:13.026978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0d9b0 (9): Bad file descriptor
00:07:25.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:25.192 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:25.192 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:25.192 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1600061
00:07:25.192 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:25.761 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:25.761 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1600061
00:07:25.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1600061) - No such process
00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1600061
00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:25.762 05:29:13
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1600061 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1600061 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.762 
05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.762 [2024-11-27 05:29:13.551869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1600654 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1600654 00:07:25.762 05:29:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.762 [2024-11-27 05:29:13.644768] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:26.330 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.330 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1600654 00:07:26.330 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.589 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.589 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1600654 00:07:26.589 05:29:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.159 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.159 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1600654 00:07:27.159 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.727 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:27.727 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1600654 00:07:27.727 05:29:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.295 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:28.295 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1600654 00:07:28.295 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.864 05:29:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:28.864 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1600654
00:07:28.864 05:29:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:28.864 Initializing NVMe Controllers
00:07:28.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:28.864 Controller IO queue size 128, less than required.
00:07:28.864 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:28.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:28.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:28.864 Initialization complete. Launching workers.
00:07:28.864 ========================================================
00:07:28.864 Latency(us)
00:07:28.864 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:28.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002139.36 1000120.12 1005900.77
00:07:28.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004038.71 1000222.21 1010238.26
00:07:28.865 ========================================================
00:07:28.865 Total                                                                    :     256.00       0.12 1003089.04 1000120.12 1010238.26
00:07:28.865
00:07:29.123 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:29.123 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1600654
00:07:29.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1600654) - No such process
00:07:29.123 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
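The `(( delay++ > 20 ))` / `kill -0` / `sleep 0.5` trace above is a standard way to wait for a process that is not a child of the current shell: `kill -0` only probes for existence, and the loop ends when the probe fails ("No such process"). A standalone sketch of that pattern; `wait_for_exit` is a hypothetical name, and delete_subsystem.sh inlines the same logic directly with retry budgets of 20 or 30 iterations:

```shell
# Probe a PID with kill -0 until it disappears, giving up after a bounded
# number of half-second probes -- the delay++/kill -0/sleep 0.5 pattern
# traced above. Returns 0 once the process is gone, 1 on timeout.
wait_for_exit() {
    local pid=$1 delay=0 max=${2:-20}
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > max )); then
            return 1   # still running after the retry budget
        fi
        sleep 0.5
    done
    return 0           # kill -0 failed: the process has exited
}
```

Typical use would be along the lines of `spdk_nvme_perf ... & wait_for_exit "$!"`, after which the script can safely assert the perf run finished.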
wait 1600654 00:07:29.123 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:29.123 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:29.123 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:29.123 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:29.123 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.123 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:29.123 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.124 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.124 rmmod nvme_tcp 00:07:29.383 rmmod nvme_fabrics 00:07:29.383 rmmod nvme_keyring 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1599969 ']' 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1599969 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1599969 ']' 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1599969 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:29.383 05:29:17 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1599969 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1599969' 00:07:29.383 killing process with pid 1599969 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1599969 00:07:29.383 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1599969 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.648 05:29:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.555 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:31.555 00:07:31.555 real 0m16.215s 00:07:31.555 user 0m29.192s 00:07:31.555 sys 0m5.449s 00:07:31.555 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.555 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.555 ************************************ 00:07:31.555 END TEST nvmf_delete_subsystem 00:07:31.555 ************************************ 00:07:31.555 05:29:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:31.555 05:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:31.555 05:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.555 05:29:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.555 ************************************ 00:07:31.555 START TEST nvmf_host_management 00:07:31.555 ************************************ 00:07:31.555 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:31.815 * Looking for test storage... 
00:07:31.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:31.815 05:29:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.815 05:29:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:31.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.815 --rc genhtml_branch_coverage=1 00:07:31.815 --rc genhtml_function_coverage=1 00:07:31.815 --rc genhtml_legend=1 00:07:31.815 --rc geninfo_all_blocks=1 00:07:31.815 --rc geninfo_unexecuted_blocks=1 00:07:31.815 00:07:31.815 ' 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:31.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.815 --rc genhtml_branch_coverage=1 00:07:31.815 --rc genhtml_function_coverage=1 00:07:31.815 --rc genhtml_legend=1 00:07:31.815 --rc geninfo_all_blocks=1 00:07:31.815 --rc geninfo_unexecuted_blocks=1 00:07:31.815 00:07:31.815 ' 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:31.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.815 --rc genhtml_branch_coverage=1 00:07:31.815 --rc genhtml_function_coverage=1 00:07:31.815 --rc genhtml_legend=1 00:07:31.815 --rc geninfo_all_blocks=1 00:07:31.815 --rc geninfo_unexecuted_blocks=1 00:07:31.815 00:07:31.815 ' 00:07:31.815 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:31.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.815 --rc genhtml_branch_coverage=1 00:07:31.815 --rc genhtml_function_coverage=1 00:07:31.815 --rc genhtml_legend=1 00:07:31.815 --rc geninfo_all_blocks=1 00:07:31.815 --rc geninfo_unexecuted_blocks=1 00:07:31.815 00:07:31.816 ' 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
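The scripts/common.sh trace above (`lt 1.15 2`, `cmp_versions`, `read -ra ver1`) walks a dotted-version comparison field by field to decide which lcov option set to use. A simplified sketch of the same idea, assuming plain dot-separated numeric versions; the real helper also splits on `-` and `:` (`IFS=.-:`) and handles more edge cases:

```shell
# Compare two dot-separated numeric versions field by field.
# Succeeds (returns 0) when $1 is strictly lower than $2, so missing
# fields compare as 0 (e.g. "2" is treated as "2.0" against "2.1").
lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1  # equal versions are not "less than"
}
```

So `lt 1.15 2` succeeds (1 < 2 on the first field), which is why the trace above ends with `return 0` and selects the branch-coverage lcov options.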
00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:31.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:31.816 05:29:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:38.386 05:29:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.386 05:29:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:38.386 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.386 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:38.387 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:38.387 05:29:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:38.387 Found net devices under 0000:86:00.0: cvl_0_0 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:38.387 Found net devices under 0000:86:00.1: cvl_0_1 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:38.387 05:29:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:38.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:07:38.387 00:07:38.387 --- 10.0.0.2 ping statistics --- 00:07:38.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.387 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:38.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:07:38.387 00:07:38.387 --- 10.0.0.1 ping statistics --- 00:07:38.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.387 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1604766 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1604766 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1604766 ']' 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.387 05:29:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.388 [2024-11-27 05:29:25.861080] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:07:38.388 [2024-11-27 05:29:25.861121] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.388 [2024-11-27 05:29:25.936823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.388 [2024-11-27 05:29:25.978444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.388 [2024-11-27 05:29:25.978477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.388 [2024-11-27 05:29:25.978484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.388 [2024-11-27 05:29:25.978491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.388 [2024-11-27 05:29:25.978496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:38.388 [2024-11-27 05:29:25.980062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.388 [2024-11-27 05:29:25.980169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.388 [2024-11-27 05:29:25.980253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.388 [2024-11-27 05:29:25.980253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 [2024-11-27 05:29:26.716031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:38.957 05:29:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 Malloc0 00:07:38.957 [2024-11-27 05:29:26.783521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1605035 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1605035 /var/tmp/bdevperf.sock 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1605035 ']' 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:38.957 { 00:07:38.957 "params": { 00:07:38.957 "name": "Nvme$subsystem", 00:07:38.957 "trtype": "$TEST_TRANSPORT", 00:07:38.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:38.957 "adrfam": "ipv4", 00:07:38.957 "trsvcid": "$NVMF_PORT", 00:07:38.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:38.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:38.957 "hdgst": ${hdgst:-false}, 
00:07:38.957 "ddgst": ${ddgst:-false} 00:07:38.957 }, 00:07:38.957 "method": "bdev_nvme_attach_controller" 00:07:38.957 } 00:07:38.957 EOF 00:07:38.957 )") 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:38.957 05:29:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:38.957 "params": { 00:07:38.957 "name": "Nvme0", 00:07:38.957 "trtype": "tcp", 00:07:38.957 "traddr": "10.0.0.2", 00:07:38.957 "adrfam": "ipv4", 00:07:38.957 "trsvcid": "4420", 00:07:38.957 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:38.957 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:38.957 "hdgst": false, 00:07:38.957 "ddgst": false 00:07:38.957 }, 00:07:38.957 "method": "bdev_nvme_attach_controller" 00:07:38.957 }' 00:07:38.957 [2024-11-27 05:29:26.879806] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:07:38.957 [2024-11-27 05:29:26.879854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605035 ] 00:07:38.957 [2024-11-27 05:29:26.956288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.217 [2024-11-27 05:29:26.997227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.476 Running I/O for 10 seconds... 
00:07:39.735 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.735 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:39.735 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:39.735 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.735 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=932 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 932 -ge 100 ']' 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.996 [2024-11-27 05:29:27.790851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820b0 is same with the state(6) to be set 00:07:39.996 [2024-11-27 05:29:27.790895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820b0 is same with the state(6) to be set 00:07:39.996 [2024-11-27 05:29:27.790903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820b0 is 
same with the state(6) to be set 00:07:39.996 [2024-11-27 05:29:27.790909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820b0 is same with the state(6) to be set 00:07:39.996 [2024-11-27 05:29:27.790915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820b0 is same with the state(6) to be set 00:07:39.996 [2024-11-27 05:29:27.790921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a820b0 is same with the state(6) to be set 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.996 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.996 [2024-11-27 05:29:27.797240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.996 [2024-11-27 05:29:27.797271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.996 [2024-11-27 05:29:27.797293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.996 [2024-11-27 05:29:27.797313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.996 [2024-11-27 05:29:27.797340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ece510 is same with the state(6) to be set 00:07:39.996 [2024-11-27 05:29:27.797405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.996 [2024-11-27 05:29:27.797419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.996 [2024-11-27 05:29:27.797450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.996 [2024-11-27 05:29:27.797473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.996 [2024-11-27 05:29:27.797495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.996 [2024-11-27 05:29:27.797518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.996 [2024-11-27 05:29:27.797540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.996 [2024-11-27 05:29:27.797561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.996 [2024-11-27 05:29:27.797572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 
05:29:27.797627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797758] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.797982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.797992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 
[2024-11-27 05:29:27.798014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.997 [2024-11-27 05:29:27.798439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.997 [2024-11-27 05:29:27.798451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798518] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.798840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.998 [2024-11-27 05:29:27.798850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.998 [2024-11-27 05:29:27.799915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:39.998 task offset: 0 on job bdev=Nvme0n1 fails 00:07:39.998 00:07:39.998 Latency(us) 00:07:39.998 [2024-11-27T04:29:28.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.998 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:39.998 Job: Nvme0n1 ended in about 0.52 seconds with error 00:07:39.998 Verification LBA range: start 0x0 length 0x400 00:07:39.998 Nvme0n1 : 0.52 1984.99 124.06 124.06 0.00 29659.57 2168.93 26838.55 00:07:39.998 [2024-11-27T04:29:28.002Z] 
=================================================================================================================== 00:07:39.998 [2024-11-27T04:29:28.002Z] Total : 1984.99 124.06 124.06 0.00 29659.57 2168.93 26838.55 00:07:39.998 [2024-11-27 05:29:27.802382] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.998 [2024-11-27 05:29:27.802409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ece510 (9): Bad file descriptor 00:07:39.998 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.998 05:29:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:39.998 [2024-11-27 05:29:27.808369] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1605035 00:07:40.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1605035) - No such process 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:40.935 05:29:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:40.935 { 00:07:40.935 "params": { 00:07:40.935 "name": "Nvme$subsystem", 00:07:40.935 "trtype": "$TEST_TRANSPORT", 00:07:40.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:40.935 "adrfam": "ipv4", 00:07:40.935 "trsvcid": "$NVMF_PORT", 00:07:40.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:40.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:40.935 "hdgst": ${hdgst:-false}, 00:07:40.935 "ddgst": ${ddgst:-false} 00:07:40.935 }, 00:07:40.935 "method": "bdev_nvme_attach_controller" 00:07:40.935 } 00:07:40.935 EOF 00:07:40.935 )") 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:40.935 05:29:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:40.935 "params": { 00:07:40.935 "name": "Nvme0", 00:07:40.935 "trtype": "tcp", 00:07:40.935 "traddr": "10.0.0.2", 00:07:40.935 "adrfam": "ipv4", 00:07:40.935 "trsvcid": "4420", 00:07:40.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:40.935 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:40.935 "hdgst": false, 00:07:40.935 "ddgst": false 00:07:40.935 }, 00:07:40.935 "method": "bdev_nvme_attach_controller" 00:07:40.935 }' 00:07:40.935 [2024-11-27 05:29:28.858202] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:07:40.935 [2024-11-27 05:29:28.858247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605363 ] 00:07:40.935 [2024-11-27 05:29:28.936503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.194 [2024-11-27 05:29:28.975526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.194 Running I/O for 1 seconds... 00:07:42.571 2048.00 IOPS, 128.00 MiB/s 00:07:42.571 Latency(us) 00:07:42.571 [2024-11-27T04:29:30.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.571 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:42.571 Verification LBA range: start 0x0 length 0x400 00:07:42.571 Nvme0n1 : 1.03 2055.26 128.45 0.00 0.00 30658.85 4868.39 27088.21 00:07:42.571 [2024-11-27T04:29:30.575Z] =================================================================================================================== 00:07:42.571 [2024-11-27T04:29:30.575Z] Total : 2055.26 128.45 0.00 0.00 30658.85 4868.39 27088.21 00:07:42.571 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:42.571 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:42.571 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:42.571 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:42.572 05:29:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.572 rmmod nvme_tcp 00:07:42.572 rmmod nvme_fabrics 00:07:42.572 rmmod nvme_keyring 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1604766 ']' 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1604766 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1604766 ']' 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1604766 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1604766 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1604766' 00:07:42.572 killing process with pid 1604766 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1604766 00:07:42.572 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1604766 00:07:42.831 [2024-11-27 05:29:30.655054] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.831 05:29:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:45.400 00:07:45.400 real 0m13.222s 00:07:45.400 user 0m23.009s 00:07:45.400 sys 0m5.769s 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.400 ************************************ 00:07:45.400 END TEST nvmf_host_management 00:07:45.400 ************************************ 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.400 ************************************ 00:07:45.400 START TEST nvmf_lvol 00:07:45.400 ************************************ 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:45.400 * Looking for test storage... 
00:07:45.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.400 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.401 05:29:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:45.401 05:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.401 --rc genhtml_branch_coverage=1 00:07:45.401 --rc genhtml_function_coverage=1 00:07:45.401 --rc genhtml_legend=1 00:07:45.401 --rc geninfo_all_blocks=1 00:07:45.401 --rc geninfo_unexecuted_blocks=1 
00:07:45.401 00:07:45.401 ' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.401 --rc genhtml_branch_coverage=1 00:07:45.401 --rc genhtml_function_coverage=1 00:07:45.401 --rc genhtml_legend=1 00:07:45.401 --rc geninfo_all_blocks=1 00:07:45.401 --rc geninfo_unexecuted_blocks=1 00:07:45.401 00:07:45.401 ' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.401 --rc genhtml_branch_coverage=1 00:07:45.401 --rc genhtml_function_coverage=1 00:07:45.401 --rc genhtml_legend=1 00:07:45.401 --rc geninfo_all_blocks=1 00:07:45.401 --rc geninfo_unexecuted_blocks=1 00:07:45.401 00:07:45.401 ' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.401 --rc genhtml_branch_coverage=1 00:07:45.401 --rc genhtml_function_coverage=1 00:07:45.401 --rc genhtml_legend=1 00:07:45.401 --rc geninfo_all_blocks=1 00:07:45.401 --rc geninfo_unexecuted_blocks=1 00:07:45.401 00:07:45.401 ' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.401 05:29:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.401 05:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.809 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:50.810 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:50.810 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.810 
05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:50.810 Found net devices under 0000:86:00.0: cvl_0_0 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.810 05:29:38 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:50.810 Found net devices under 0000:86:00.1: cvl_0_1 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.810 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.070 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.070 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.070 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.070 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:51.070 05:29:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.070 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.070 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.070 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:51.070 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:51.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:51.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:07:51.070 00:07:51.070 --- 10.0.0.2 ping statistics --- 00:07:51.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.070 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:07:51.070 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:07:51.070 00:07:51.070 --- 10.0.0.1 ping statistics --- 00:07:51.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.070 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:51.070 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.070 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:51.070 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:51.070 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.071 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:51.071 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:51.071 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.071 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:51.071 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1609292 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1609292 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1609292 ']' 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.330 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.330 [2024-11-27 05:29:39.142619] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:07:51.330 [2024-11-27 05:29:39.142682] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.330 [2024-11-27 05:29:39.221179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.330 [2024-11-27 05:29:39.262642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.330 [2024-11-27 05:29:39.262681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.330 [2024-11-27 05:29:39.262690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.330 [2024-11-27 05:29:39.262697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.330 [2024-11-27 05:29:39.262704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:51.330 [2024-11-27 05:29:39.264019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.330 [2024-11-27 05:29:39.264124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.330 [2024-11-27 05:29:39.264125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.588 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.588 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:51.588 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:51.588 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.588 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.588 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.588 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:51.588 [2024-11-27 05:29:39.566317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.847 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:51.847 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:51.847 05:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:52.106 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:52.106 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:52.364 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:52.623 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=95835648-28b0-4111-8926-44bd755bfe92 00:07:52.623 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 95835648-28b0-4111-8926-44bd755bfe92 lvol 20 00:07:52.881 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cf099f43-ef9c-4afb-9d11-a20280556315 00:07:52.881 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.881 05:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cf099f43-ef9c-4afb-9d11-a20280556315 00:07:53.140 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.399 [2024-11-27 05:29:41.226244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.399 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.659 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1609710 00:07:53.660 05:29:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:53.660 05:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:54.596 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cf099f43-ef9c-4afb-9d11-a20280556315 MY_SNAPSHOT 00:07:54.855 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4fa964f5-f07d-430c-bea6-69d9ca0bc289 00:07:54.855 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cf099f43-ef9c-4afb-9d11-a20280556315 30 00:07:55.114 05:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4fa964f5-f07d-430c-bea6-69d9ca0bc289 MY_CLONE 00:07:55.373 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bfcb042e-c683-4fd0-b9c0-f67a0e65c13d 00:07:55.373 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bfcb042e-c683-4fd0-b9c0-f67a0e65c13d 00:07:55.941 05:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1609710 00:08:04.061 Initializing NVMe Controllers 00:08:04.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:04.061 Controller IO queue size 128, less than required. 00:08:04.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
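The `nvmf_lvol.sh` steps traced above reduce to a short `rpc.py` call sequence: two malloc bdevs combined into a raid0, an lvstore and lvol on top, then an NVMe-oF subsystem exporting the lvol. A hedged dry-run sketch of that sequence (UUID placeholders stand in for values the real run captures from RPC output; `SPDK_DIR` is an assumed environment variable, and with `DRY=1`, the default, commands are only printed):

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence from nvmf_lvol.sh as traced in the log.
# Set DRY=0 with a running nvmf_tgt and a real SPDK_DIR to execute.
set -euo pipefail
rpc="${SPDK_DIR:-/path/to/spdk}/scripts/rpc.py"
call() { if [[ "${DRY:-1}" == 1 ]]; then echo "+ rpc.py $*"; else "$rpc" "$@"; fi; }

call nvmf_create_transport -t tcp -o -u 8192
call bdev_malloc_create 64 512                    # -> Malloc0
call bdev_malloc_create 64 512                    # -> Malloc1
call bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
call bdev_lvol_create_lvstore raid0 lvs           # prints the lvstore UUID
lvs_uuid='<lvstore-uuid>'                         # captured from the call above in a real run
call bdev_lvol_create -u "$lvs_uuid" lvol 20      # prints the lvol UUID
lvol_uuid='<lvol-uuid>'
call nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
call nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol_uuid"
call nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

After the listener is up, the log runs `spdk_nvme_perf` against `traddr:10.0.0.2 trsvcid:4420` while exercising snapshot, resize, clone, and inflate on the live lvol.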
00:08:04.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:04.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:04.061 Initialization complete. Launching workers. 00:08:04.061 ======================================================== 00:08:04.061 Latency(us) 00:08:04.061 Device Information : IOPS MiB/s Average min max 00:08:04.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12011.71 46.92 10662.17 1580.84 42972.18 00:08:04.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12120.41 47.35 10561.64 3532.95 106373.63 00:08:04.061 ======================================================== 00:08:04.061 Total : 24132.12 94.27 10611.68 1580.84 106373.63 00:08:04.061 00:08:04.061 05:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:04.320 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cf099f43-ef9c-4afb-9d11-a20280556315 00:08:04.320 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95835648-28b0-4111-8926-44bd755bfe92 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:04.580 05:29:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.580 rmmod nvme_tcp 00:08:04.580 rmmod nvme_fabrics 00:08:04.580 rmmod nvme_keyring 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1609292 ']' 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1609292 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1609292 ']' 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1609292 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.580 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1609292 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1609292' 00:08:04.839 killing process with pid 1609292 00:08:04.839 
05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1609292 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1609292 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.839 05:29:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.372 05:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.372 00:08:07.372 real 0m22.058s 00:08:07.372 user 1m3.252s 00:08:07.372 sys 0m7.675s 00:08:07.372 05:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.372 05:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:07.372 ************************************ 00:08:07.372 
END TEST nvmf_lvol 00:08:07.372 ************************************ 00:08:07.372 05:29:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:07.372 05:29:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.372 05:29:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.372 05:29:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.372 ************************************ 00:08:07.372 START TEST nvmf_lvs_grow 00:08:07.372 ************************************ 00:08:07.372 05:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:07.372 * Looking for test storage... 00:08:07.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.372 05:29:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.372 --rc genhtml_branch_coverage=1 00:08:07.372 --rc genhtml_function_coverage=1 00:08:07.372 --rc genhtml_legend=1 00:08:07.372 --rc geninfo_all_blocks=1 00:08:07.372 --rc geninfo_unexecuted_blocks=1 00:08:07.372 00:08:07.372 ' 
00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.372 --rc genhtml_branch_coverage=1 00:08:07.372 --rc genhtml_function_coverage=1 00:08:07.372 --rc genhtml_legend=1 00:08:07.372 --rc geninfo_all_blocks=1 00:08:07.372 --rc geninfo_unexecuted_blocks=1 00:08:07.372 00:08:07.372 ' 00:08:07.372 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.372 --rc genhtml_branch_coverage=1 00:08:07.372 --rc genhtml_function_coverage=1 00:08:07.373 --rc genhtml_legend=1 00:08:07.373 --rc geninfo_all_blocks=1 00:08:07.373 --rc geninfo_unexecuted_blocks=1 00:08:07.373 00:08:07.373 ' 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.373 --rc genhtml_branch_coverage=1 00:08:07.373 --rc genhtml_function_coverage=1 00:08:07.373 --rc genhtml_legend=1 00:08:07.373 --rc geninfo_all_blocks=1 00:08:07.373 --rc geninfo_unexecuted_blocks=1 00:08:07.373 00:08:07.373 ' 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.373 05:29:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.373 
05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.373 05:29:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.373 
05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.373 05:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:13.972 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:13.973 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:13.973 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:13.973 
05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:13.973 Found net devices under 0000:86:00.0: cvl_0_0 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:13.973 Found net devices under 0000:86:00.1: cvl_0_1 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.973 05:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:13.973 05:30:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:13.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:08:13.973 00:08:13.973 --- 10.0.0.2 ping statistics --- 00:08:13.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.973 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:08:13.973 00:08:13.973 --- 10.0.0.1 ping statistics --- 00:08:13.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.973 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1615238 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1615238 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1615238 ']' 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.973 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.974 [2024-11-27 05:30:01.269769] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:08:13.974 [2024-11-27 05:30:01.269812] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.974 [2024-11-27 05:30:01.347537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.974 [2024-11-27 05:30:01.388081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.974 [2024-11-27 05:30:01.388121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.974 [2024-11-27 05:30:01.388130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.974 [2024-11-27 05:30:01.388138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.974 [2024-11-27 05:30:01.388144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:13.974 [2024-11-27 05:30:01.388777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:13.974 [2024-11-27 05:30:01.689352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.974 ************************************ 00:08:13.974 START TEST lvs_grow_clean 00:08:13.974 ************************************ 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.974 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:14.233 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:14.233 05:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:14.233 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:14.233 05:30:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:14.233 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:14.491 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:14.492 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:14.492 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 lvol 150 00:08:14.751 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c77384da-742d-4295-aa75-92a4798cadce 00:08:14.751 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:14.751 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:15.010 [2024-11-27 05:30:02.772729] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:15.010 [2024-11-27 05:30:02.772780] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:15.010 true 00:08:15.010 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:15.010 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:15.010 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:15.010 05:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:15.269 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c77384da-742d-4295-aa75-92a4798cadce 00:08:15.529 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:15.529 [2024-11-27 05:30:03.531015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.789 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.789 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1615737 00:08:15.789 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:15.789 05:30:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.789 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1615737 /var/tmp/bdevperf.sock 00:08:15.789 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1615737 ']' 00:08:15.789 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:15.789 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.789 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:15.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:15.789 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.789 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:15.789 [2024-11-27 05:30:03.780938] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:08:15.789 [2024-11-27 05:30:03.780989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1615737 ] 00:08:16.048 [2024-11-27 05:30:03.855678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.048 [2024-11-27 05:30:03.898134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.048 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.048 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:16.048 05:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:16.616 Nvme0n1 00:08:16.616 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:16.616 [ 00:08:16.616 { 00:08:16.616 "name": "Nvme0n1", 00:08:16.616 "aliases": [ 00:08:16.616 "c77384da-742d-4295-aa75-92a4798cadce" 00:08:16.616 ], 00:08:16.616 "product_name": "NVMe disk", 00:08:16.616 "block_size": 4096, 00:08:16.616 "num_blocks": 38912, 00:08:16.616 "uuid": "c77384da-742d-4295-aa75-92a4798cadce", 00:08:16.616 "numa_id": 1, 00:08:16.616 "assigned_rate_limits": { 00:08:16.616 "rw_ios_per_sec": 0, 00:08:16.616 "rw_mbytes_per_sec": 0, 00:08:16.616 "r_mbytes_per_sec": 0, 00:08:16.616 "w_mbytes_per_sec": 0 00:08:16.616 }, 00:08:16.616 "claimed": false, 00:08:16.616 "zoned": false, 00:08:16.616 "supported_io_types": { 00:08:16.616 "read": true, 
00:08:16.616 "write": true, 00:08:16.616 "unmap": true, 00:08:16.616 "flush": true, 00:08:16.616 "reset": true, 00:08:16.616 "nvme_admin": true, 00:08:16.616 "nvme_io": true, 00:08:16.616 "nvme_io_md": false, 00:08:16.616 "write_zeroes": true, 00:08:16.616 "zcopy": false, 00:08:16.616 "get_zone_info": false, 00:08:16.616 "zone_management": false, 00:08:16.616 "zone_append": false, 00:08:16.616 "compare": true, 00:08:16.616 "compare_and_write": true, 00:08:16.616 "abort": true, 00:08:16.616 "seek_hole": false, 00:08:16.616 "seek_data": false, 00:08:16.616 "copy": true, 00:08:16.616 "nvme_iov_md": false 00:08:16.616 }, 00:08:16.616 "memory_domains": [ 00:08:16.616 { 00:08:16.616 "dma_device_id": "system", 00:08:16.616 "dma_device_type": 1 00:08:16.616 } 00:08:16.616 ], 00:08:16.616 "driver_specific": { 00:08:16.616 "nvme": [ 00:08:16.616 { 00:08:16.616 "trid": { 00:08:16.616 "trtype": "TCP", 00:08:16.616 "adrfam": "IPv4", 00:08:16.616 "traddr": "10.0.0.2", 00:08:16.616 "trsvcid": "4420", 00:08:16.616 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:16.616 }, 00:08:16.616 "ctrlr_data": { 00:08:16.616 "cntlid": 1, 00:08:16.616 "vendor_id": "0x8086", 00:08:16.616 "model_number": "SPDK bdev Controller", 00:08:16.616 "serial_number": "SPDK0", 00:08:16.616 "firmware_revision": "25.01", 00:08:16.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:16.616 "oacs": { 00:08:16.616 "security": 0, 00:08:16.616 "format": 0, 00:08:16.616 "firmware": 0, 00:08:16.616 "ns_manage": 0 00:08:16.616 }, 00:08:16.616 "multi_ctrlr": true, 00:08:16.616 "ana_reporting": false 00:08:16.616 }, 00:08:16.616 "vs": { 00:08:16.616 "nvme_version": "1.3" 00:08:16.616 }, 00:08:16.616 "ns_data": { 00:08:16.616 "id": 1, 00:08:16.616 "can_share": true 00:08:16.616 } 00:08:16.616 } 00:08:16.616 ], 00:08:16.616 "mp_policy": "active_passive" 00:08:16.617 } 00:08:16.617 } 00:08:16.617 ] 00:08:16.617 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1615809 00:08:16.617 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:16.617 05:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:16.875 Running I/O for 10 seconds... 00:08:17.812 Latency(us) 00:08:17.812 [2024-11-27T04:30:05.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:17.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.812 Nvme0n1 : 1.00 23139.00 90.39 0.00 0.00 0.00 0.00 0.00 00:08:17.812 [2024-11-27T04:30:05.816Z] =================================================================================================================== 00:08:17.813 [2024-11-27T04:30:05.817Z] Total : 23139.00 90.39 0.00 0.00 0.00 0.00 0.00 00:08:17.813 00:08:18.749 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:18.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.749 Nvme0n1 : 2.00 23352.00 91.22 0.00 0.00 0.00 0.00 0.00 00:08:18.749 [2024-11-27T04:30:06.753Z] =================================================================================================================== 00:08:18.749 [2024-11-27T04:30:06.753Z] Total : 23352.00 91.22 0.00 0.00 0.00 0.00 0.00 00:08:18.749 00:08:19.007 true 00:08:19.007 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:19.007 05:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:19.266 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:19.266 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:19.266 05:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1615809 00:08:19.835 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.835 Nvme0n1 : 3.00 23465.33 91.66 0.00 0.00 0.00 0.00 0.00 00:08:19.835 [2024-11-27T04:30:07.839Z] =================================================================================================================== 00:08:19.835 [2024-11-27T04:30:07.839Z] Total : 23465.33 91.66 0.00 0.00 0.00 0.00 0.00 00:08:19.835 00:08:20.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.771 Nvme0n1 : 4.00 23523.50 91.89 0.00 0.00 0.00 0.00 0.00 00:08:20.771 [2024-11-27T04:30:08.775Z] =================================================================================================================== 00:08:20.771 [2024-11-27T04:30:08.775Z] Total : 23523.50 91.89 0.00 0.00 0.00 0.00 0.00 00:08:20.771 00:08:22.148 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.148 Nvme0n1 : 5.00 23568.60 92.06 0.00 0.00 0.00 0.00 0.00 00:08:22.148 [2024-11-27T04:30:10.152Z] =================================================================================================================== 00:08:22.149 [2024-11-27T04:30:10.153Z] Total : 23568.60 92.06 0.00 0.00 0.00 0.00 0.00 00:08:22.149 00:08:22.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.716 Nvme0n1 : 6.00 23560.83 92.03 0.00 0.00 0.00 0.00 0.00 00:08:22.716 [2024-11-27T04:30:10.720Z] =================================================================================================================== 00:08:22.716 
[2024-11-27T04:30:10.720Z] Total : 23560.83 92.03 0.00 0.00 0.00 0.00 0.00 00:08:22.716 00:08:24.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.094 Nvme0n1 : 7.00 23588.43 92.14 0.00 0.00 0.00 0.00 0.00 00:08:24.094 [2024-11-27T04:30:12.098Z] =================================================================================================================== 00:08:24.094 [2024-11-27T04:30:12.098Z] Total : 23588.43 92.14 0.00 0.00 0.00 0.00 0.00 00:08:24.094 00:08:25.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.032 Nvme0n1 : 8.00 23610.25 92.23 0.00 0.00 0.00 0.00 0.00 00:08:25.032 [2024-11-27T04:30:13.036Z] =================================================================================================================== 00:08:25.032 [2024-11-27T04:30:13.036Z] Total : 23610.25 92.23 0.00 0.00 0.00 0.00 0.00 00:08:25.032 00:08:25.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.970 Nvme0n1 : 9.00 23633.11 92.32 0.00 0.00 0.00 0.00 0.00 00:08:25.970 [2024-11-27T04:30:13.974Z] =================================================================================================================== 00:08:25.970 [2024-11-27T04:30:13.974Z] Total : 23633.11 92.32 0.00 0.00 0.00 0.00 0.00 00:08:25.970 00:08:26.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.909 Nvme0n1 : 10.00 23646.40 92.37 0.00 0.00 0.00 0.00 0.00 00:08:26.909 [2024-11-27T04:30:14.913Z] =================================================================================================================== 00:08:26.909 [2024-11-27T04:30:14.913Z] Total : 23646.40 92.37 0.00 0.00 0.00 0.00 0.00 00:08:26.909 00:08:26.909 00:08:26.909 Latency(us) 00:08:26.909 [2024-11-27T04:30:14.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:26.909 Nvme0n1 : 10.01 23645.71 92.37 0.00 0.00 5410.28 3167.57 13606.52 00:08:26.909 [2024-11-27T04:30:14.913Z] =================================================================================================================== 00:08:26.909 [2024-11-27T04:30:14.913Z] Total : 23645.71 92.37 0.00 0.00 5410.28 3167.57 13606.52 00:08:26.909 { 00:08:26.909 "results": [ 00:08:26.909 { 00:08:26.909 "job": "Nvme0n1", 00:08:26.909 "core_mask": "0x2", 00:08:26.909 "workload": "randwrite", 00:08:26.909 "status": "finished", 00:08:26.909 "queue_depth": 128, 00:08:26.909 "io_size": 4096, 00:08:26.909 "runtime": 10.005706, 00:08:26.909 "iops": 23645.70775915263, 00:08:26.909 "mibps": 92.36604593418996, 00:08:26.909 "io_failed": 0, 00:08:26.909 "io_timeout": 0, 00:08:26.909 "avg_latency_us": 5410.281099678932, 00:08:26.909 "min_latency_us": 3167.5733333333333, 00:08:26.909 "max_latency_us": 13606.521904761905 00:08:26.909 } 00:08:26.909 ], 00:08:26.909 "core_count": 1 00:08:26.909 } 00:08:26.909 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1615737 00:08:26.909 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1615737 ']' 00:08:26.909 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1615737 00:08:26.909 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:26.909 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.909 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1615737 00:08:26.909 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:26.909 05:30:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:26.909 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1615737' 00:08:26.909 killing process with pid 1615737 00:08:26.909 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1615737 00:08:26.909 Received shutdown signal, test time was about 10.000000 seconds 00:08:26.909 00:08:26.909 Latency(us) 00:08:26.909 [2024-11-27T04:30:14.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.909 [2024-11-27T04:30:14.913Z] =================================================================================================================== 00:08:26.909 [2024-11-27T04:30:14.913Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:26.909 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1615737 00:08:27.168 05:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.169 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:27.427 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:27.427 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:27.685 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:27.685 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:27.685 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:27.944 [2024-11-27 05:30:15.705298] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:27.944 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:27.944 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:27.944 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:27.944 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:27.944 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:27.945 
05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:27.945 request: 00:08:27.945 { 00:08:27.945 "uuid": "f76f3fcd-d1c3-417d-b53f-85b51182d1f6", 00:08:27.945 "method": "bdev_lvol_get_lvstores", 00:08:27.945 "req_id": 1 00:08:27.945 } 00:08:27.945 Got JSON-RPC error response 00:08:27.945 response: 00:08:27.945 { 00:08:27.945 "code": -19, 00:08:27.945 "message": "No such device" 00:08:27.945 } 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.945 05:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.204 aio_bdev 00:08:28.204 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev c77384da-742d-4295-aa75-92a4798cadce 00:08:28.204 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=c77384da-742d-4295-aa75-92a4798cadce 00:08:28.204 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.204 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:28.204 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.204 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.204 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:28.464 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c77384da-742d-4295-aa75-92a4798cadce -t 2000 00:08:28.723 [ 00:08:28.723 { 00:08:28.723 "name": "c77384da-742d-4295-aa75-92a4798cadce", 00:08:28.723 "aliases": [ 00:08:28.723 "lvs/lvol" 00:08:28.723 ], 00:08:28.723 "product_name": "Logical Volume", 00:08:28.723 "block_size": 4096, 00:08:28.723 "num_blocks": 38912, 00:08:28.723 "uuid": "c77384da-742d-4295-aa75-92a4798cadce", 00:08:28.723 "assigned_rate_limits": { 00:08:28.723 "rw_ios_per_sec": 0, 00:08:28.723 "rw_mbytes_per_sec": 0, 00:08:28.723 "r_mbytes_per_sec": 0, 00:08:28.723 "w_mbytes_per_sec": 0 00:08:28.723 }, 00:08:28.723 "claimed": false, 00:08:28.723 "zoned": false, 00:08:28.723 "supported_io_types": { 00:08:28.723 "read": true, 00:08:28.723 "write": true, 00:08:28.723 "unmap": true, 00:08:28.723 "flush": false, 00:08:28.723 "reset": true, 00:08:28.723 
"nvme_admin": false, 00:08:28.723 "nvme_io": false, 00:08:28.723 "nvme_io_md": false, 00:08:28.723 "write_zeroes": true, 00:08:28.723 "zcopy": false, 00:08:28.723 "get_zone_info": false, 00:08:28.723 "zone_management": false, 00:08:28.723 "zone_append": false, 00:08:28.723 "compare": false, 00:08:28.723 "compare_and_write": false, 00:08:28.723 "abort": false, 00:08:28.723 "seek_hole": true, 00:08:28.723 "seek_data": true, 00:08:28.723 "copy": false, 00:08:28.723 "nvme_iov_md": false 00:08:28.723 }, 00:08:28.723 "driver_specific": { 00:08:28.723 "lvol": { 00:08:28.723 "lvol_store_uuid": "f76f3fcd-d1c3-417d-b53f-85b51182d1f6", 00:08:28.723 "base_bdev": "aio_bdev", 00:08:28.723 "thin_provision": false, 00:08:28.723 "num_allocated_clusters": 38, 00:08:28.723 "snapshot": false, 00:08:28.723 "clone": false, 00:08:28.723 "esnap_clone": false 00:08:28.723 } 00:08:28.723 } 00:08:28.723 } 00:08:28.723 ] 00:08:28.723 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:28.723 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:28.723 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:28.723 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:28.723 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:28.723 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:28.982 05:30:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:28.982 05:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c77384da-742d-4295-aa75-92a4798cadce 00:08:29.241 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f76f3fcd-d1c3-417d-b53f-85b51182d1f6 00:08:29.500 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:29.501 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.501 00:08:29.501 real 0m15.735s 00:08:29.501 user 0m15.254s 00:08:29.501 sys 0m1.551s 00:08:29.501 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.501 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:29.501 ************************************ 00:08:29.501 END TEST lvs_grow_clean 00:08:29.501 ************************************ 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.760 ************************************ 
00:08:29.760 START TEST lvs_grow_dirty 00:08:29.760 ************************************ 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.760 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.020 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:30.020 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:30.020 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9b900605-d374-4781-a794-e045a0a509d4 00:08:30.020 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:30.020 05:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:30.278 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:30.278 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:30.278 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9b900605-d374-4781-a794-e045a0a509d4 lvol 150 00:08:30.537 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=35a9460c-14f5-495c-909c-76262c7645fd 00:08:30.537 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:30.537 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:30.537 [2024-11-27 05:30:18.506505] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:30.537 [2024-11-27 05:30:18.506559] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:30.537 true 00:08:30.537 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:30.537 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:30.796 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:30.796 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.055 05:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 35a9460c-14f5-495c-909c-76262c7645fd 00:08:31.315 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.315 [2024-11-27 05:30:19.224636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.315 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.574 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1618789 00:08:31.574 05:30:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.574 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:31.574 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1618789 /var/tmp/bdevperf.sock 00:08:31.574 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1618789 ']' 00:08:31.574 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.574 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.574 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.574 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.574 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:31.574 [2024-11-27 05:30:19.468226] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:08:31.574 [2024-11-27 05:30:19.468275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618789 ] 00:08:31.574 [2024-11-27 05:30:19.541789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.833 [2024-11-27 05:30:19.583865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.833 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.833 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:31.834 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:32.092 Nvme0n1 00:08:32.092 05:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:32.352 [ 00:08:32.352 { 00:08:32.352 "name": "Nvme0n1", 00:08:32.352 "aliases": [ 00:08:32.352 "35a9460c-14f5-495c-909c-76262c7645fd" 00:08:32.352 ], 00:08:32.352 "product_name": "NVMe disk", 00:08:32.352 "block_size": 4096, 00:08:32.352 "num_blocks": 38912, 00:08:32.352 "uuid": "35a9460c-14f5-495c-909c-76262c7645fd", 00:08:32.352 "numa_id": 1, 00:08:32.352 "assigned_rate_limits": { 00:08:32.352 "rw_ios_per_sec": 0, 00:08:32.352 "rw_mbytes_per_sec": 0, 00:08:32.352 "r_mbytes_per_sec": 0, 00:08:32.352 "w_mbytes_per_sec": 0 00:08:32.352 }, 00:08:32.352 "claimed": false, 00:08:32.352 "zoned": false, 00:08:32.352 "supported_io_types": { 00:08:32.352 "read": true, 
00:08:32.352 "write": true, 00:08:32.352 "unmap": true, 00:08:32.352 "flush": true, 00:08:32.352 "reset": true, 00:08:32.352 "nvme_admin": true, 00:08:32.352 "nvme_io": true, 00:08:32.352 "nvme_io_md": false, 00:08:32.352 "write_zeroes": true, 00:08:32.352 "zcopy": false, 00:08:32.352 "get_zone_info": false, 00:08:32.352 "zone_management": false, 00:08:32.352 "zone_append": false, 00:08:32.352 "compare": true, 00:08:32.352 "compare_and_write": true, 00:08:32.352 "abort": true, 00:08:32.352 "seek_hole": false, 00:08:32.352 "seek_data": false, 00:08:32.352 "copy": true, 00:08:32.352 "nvme_iov_md": false 00:08:32.352 }, 00:08:32.352 "memory_domains": [ 00:08:32.352 { 00:08:32.352 "dma_device_id": "system", 00:08:32.352 "dma_device_type": 1 00:08:32.352 } 00:08:32.352 ], 00:08:32.352 "driver_specific": { 00:08:32.352 "nvme": [ 00:08:32.352 { 00:08:32.352 "trid": { 00:08:32.352 "trtype": "TCP", 00:08:32.352 "adrfam": "IPv4", 00:08:32.352 "traddr": "10.0.0.2", 00:08:32.352 "trsvcid": "4420", 00:08:32.352 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:32.352 }, 00:08:32.352 "ctrlr_data": { 00:08:32.352 "cntlid": 1, 00:08:32.352 "vendor_id": "0x8086", 00:08:32.352 "model_number": "SPDK bdev Controller", 00:08:32.352 "serial_number": "SPDK0", 00:08:32.352 "firmware_revision": "25.01", 00:08:32.352 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.352 "oacs": { 00:08:32.352 "security": 0, 00:08:32.353 "format": 0, 00:08:32.353 "firmware": 0, 00:08:32.353 "ns_manage": 0 00:08:32.353 }, 00:08:32.353 "multi_ctrlr": true, 00:08:32.353 "ana_reporting": false 00:08:32.353 }, 00:08:32.353 "vs": { 00:08:32.353 "nvme_version": "1.3" 00:08:32.353 }, 00:08:32.353 "ns_data": { 00:08:32.353 "id": 1, 00:08:32.353 "can_share": true 00:08:32.353 } 00:08:32.353 } 00:08:32.353 ], 00:08:32.353 "mp_policy": "active_passive" 00:08:32.353 } 00:08:32.353 } 00:08:32.353 ] 00:08:32.353 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1618879 00:08:32.353 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:32.353 05:30:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:32.353 Running I/O for 10 seconds... 00:08:33.732 Latency(us) 00:08:33.732 [2024-11-27T04:30:21.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.732 Nvme0n1 : 1.00 22803.00 89.07 0.00 0.00 0.00 0.00 0.00 00:08:33.732 [2024-11-27T04:30:21.736Z] =================================================================================================================== 00:08:33.732 [2024-11-27T04:30:21.736Z] Total : 22803.00 89.07 0.00 0.00 0.00 0.00 0.00 00:08:33.732 00:08:34.301 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:34.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.301 Nvme0n1 : 2.00 23191.50 90.59 0.00 0.00 0.00 0.00 0.00 00:08:34.301 [2024-11-27T04:30:22.305Z] =================================================================================================================== 00:08:34.301 [2024-11-27T04:30:22.305Z] Total : 23191.50 90.59 0.00 0.00 0.00 0.00 0.00 00:08:34.301 00:08:34.558 true 00:08:34.558 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:34.558 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:08:34.816 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:34.816 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:34.816 05:30:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1618879 00:08:35.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.383 Nvme0n1 : 3.00 23302.00 91.02 0.00 0.00 0.00 0.00 0.00 00:08:35.383 [2024-11-27T04:30:23.387Z] =================================================================================================================== 00:08:35.383 [2024-11-27T04:30:23.387Z] Total : 23302.00 91.02 0.00 0.00 0.00 0.00 0.00 00:08:35.383 00:08:36.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.321 Nvme0n1 : 4.00 23408.00 91.44 0.00 0.00 0.00 0.00 0.00 00:08:36.321 [2024-11-27T04:30:24.325Z] =================================================================================================================== 00:08:36.321 [2024-11-27T04:30:24.325Z] Total : 23408.00 91.44 0.00 0.00 0.00 0.00 0.00 00:08:36.321 00:08:37.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.699 Nvme0n1 : 5.00 23478.20 91.71 0.00 0.00 0.00 0.00 0.00 00:08:37.699 [2024-11-27T04:30:25.703Z] =================================================================================================================== 00:08:37.699 [2024-11-27T04:30:25.703Z] Total : 23478.20 91.71 0.00 0.00 0.00 0.00 0.00 00:08:37.699 00:08:38.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.636 Nvme0n1 : 6.00 23526.17 91.90 0.00 0.00 0.00 0.00 0.00 00:08:38.636 [2024-11-27T04:30:26.640Z] =================================================================================================================== 00:08:38.636 
[2024-11-27T04:30:26.641Z] Total : 23526.17 91.90 0.00 0.00 0.00 0.00 0.00 00:08:38.637 00:08:39.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.577 Nvme0n1 : 7.00 23570.57 92.07 0.00 0.00 0.00 0.00 0.00 00:08:39.577 [2024-11-27T04:30:27.581Z] =================================================================================================================== 00:08:39.577 [2024-11-27T04:30:27.581Z] Total : 23570.57 92.07 0.00 0.00 0.00 0.00 0.00 00:08:39.577 00:08:40.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.515 Nvme0n1 : 8.00 23593.12 92.16 0.00 0.00 0.00 0.00 0.00 00:08:40.515 [2024-11-27T04:30:28.519Z] =================================================================================================================== 00:08:40.515 [2024-11-27T04:30:28.519Z] Total : 23593.12 92.16 0.00 0.00 0.00 0.00 0.00 00:08:40.515 00:08:41.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.451 Nvme0n1 : 9.00 23609.33 92.22 0.00 0.00 0.00 0.00 0.00 00:08:41.451 [2024-11-27T04:30:29.455Z] =================================================================================================================== 00:08:41.451 [2024-11-27T04:30:29.455Z] Total : 23609.33 92.22 0.00 0.00 0.00 0.00 0.00 00:08:41.451 00:08:42.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.403 Nvme0n1 : 10.00 23612.10 92.23 0.00 0.00 0.00 0.00 0.00 00:08:42.403 [2024-11-27T04:30:30.407Z] =================================================================================================================== 00:08:42.403 [2024-11-27T04:30:30.407Z] Total : 23612.10 92.23 0.00 0.00 0.00 0.00 0.00 00:08:42.403 00:08:42.403 00:08:42.403 Latency(us) 00:08:42.403 [2024-11-27T04:30:30.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:42.403 Nvme0n1 : 10.01 23612.93 92.24 0.00 0.00 5417.75 3198.78 15229.32 00:08:42.403 [2024-11-27T04:30:30.407Z] =================================================================================================================== 00:08:42.403 [2024-11-27T04:30:30.407Z] Total : 23612.93 92.24 0.00 0.00 5417.75 3198.78 15229.32 00:08:42.403 { 00:08:42.403 "results": [ 00:08:42.403 { 00:08:42.403 "job": "Nvme0n1", 00:08:42.403 "core_mask": "0x2", 00:08:42.403 "workload": "randwrite", 00:08:42.403 "status": "finished", 00:08:42.403 "queue_depth": 128, 00:08:42.403 "io_size": 4096, 00:08:42.403 "runtime": 10.005068, 00:08:42.403 "iops": 23612.932965573047, 00:08:42.403 "mibps": 92.23801939676972, 00:08:42.403 "io_failed": 0, 00:08:42.403 "io_timeout": 0, 00:08:42.403 "avg_latency_us": 5417.745318702281, 00:08:42.403 "min_latency_us": 3198.7809523809524, 00:08:42.403 "max_latency_us": 15229.318095238095 00:08:42.403 } 00:08:42.403 ], 00:08:42.403 "core_count": 1 00:08:42.403 } 00:08:42.403 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1618789 00:08:42.403 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1618789 ']' 00:08:42.403 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1618789 00:08:42.403 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:42.403 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.403 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1618789 00:08:42.404 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:42.404 05:30:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:42.404 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1618789' 00:08:42.404 killing process with pid 1618789 00:08:42.404 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1618789 00:08:42.404 Received shutdown signal, test time was about 10.000000 seconds 00:08:42.404 00:08:42.404 Latency(us) 00:08:42.404 [2024-11-27T04:30:30.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.404 [2024-11-27T04:30:30.408Z] =================================================================================================================== 00:08:42.404 [2024-11-27T04:30:30.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:42.404 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1618789 00:08:42.662 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.920 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:43.178 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:43.178 05:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:43.178 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:43.178 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:43.178 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1615238 00:08:43.178 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1615238 00:08:43.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1615238 Killed "${NVMF_APP[@]}" "$@" 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1620714 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1620714 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1620714 ']' 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.437 05:30:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.437 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.437 [2024-11-27 05:30:31.255864] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:08:43.437 [2024-11-27 05:30:31.255912] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.437 [2024-11-27 05:30:31.332876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.437 [2024-11-27 05:30:31.373233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.437 [2024-11-27 05:30:31.373272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.437 [2024-11-27 05:30:31.373282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.437 [2024-11-27 05:30:31.373288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.437 [2024-11-27 05:30:31.373295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:43.437 [2024-11-27 05:30:31.373902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.696 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.696 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:43.696 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:43.696 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:43.696 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.696 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.696 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.696 [2024-11-27 05:30:31.680304] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:43.696 [2024-11-27 05:30:31.680402] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:43.696 [2024-11-27 05:30:31.680432] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:43.955 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:43.955 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 35a9460c-14f5-495c-909c-76262c7645fd 00:08:43.955 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=35a9460c-14f5-495c-909c-76262c7645fd 
00:08:43.955 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.955 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:43.955 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.955 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.955 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:43.955 05:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 35a9460c-14f5-495c-909c-76262c7645fd -t 2000 00:08:44.214 [ 00:08:44.214 { 00:08:44.214 "name": "35a9460c-14f5-495c-909c-76262c7645fd", 00:08:44.214 "aliases": [ 00:08:44.214 "lvs/lvol" 00:08:44.214 ], 00:08:44.214 "product_name": "Logical Volume", 00:08:44.214 "block_size": 4096, 00:08:44.214 "num_blocks": 38912, 00:08:44.214 "uuid": "35a9460c-14f5-495c-909c-76262c7645fd", 00:08:44.214 "assigned_rate_limits": { 00:08:44.214 "rw_ios_per_sec": 0, 00:08:44.214 "rw_mbytes_per_sec": 0, 00:08:44.214 "r_mbytes_per_sec": 0, 00:08:44.214 "w_mbytes_per_sec": 0 00:08:44.214 }, 00:08:44.214 "claimed": false, 00:08:44.214 "zoned": false, 00:08:44.214 "supported_io_types": { 00:08:44.214 "read": true, 00:08:44.214 "write": true, 00:08:44.214 "unmap": true, 00:08:44.214 "flush": false, 00:08:44.214 "reset": true, 00:08:44.214 "nvme_admin": false, 00:08:44.214 "nvme_io": false, 00:08:44.214 "nvme_io_md": false, 00:08:44.214 "write_zeroes": true, 00:08:44.214 "zcopy": false, 00:08:44.214 "get_zone_info": false, 00:08:44.214 "zone_management": false, 00:08:44.214 "zone_append": 
false, 00:08:44.214 "compare": false, 00:08:44.214 "compare_and_write": false, 00:08:44.214 "abort": false, 00:08:44.214 "seek_hole": true, 00:08:44.214 "seek_data": true, 00:08:44.214 "copy": false, 00:08:44.214 "nvme_iov_md": false 00:08:44.214 }, 00:08:44.214 "driver_specific": { 00:08:44.214 "lvol": { 00:08:44.214 "lvol_store_uuid": "9b900605-d374-4781-a794-e045a0a509d4", 00:08:44.214 "base_bdev": "aio_bdev", 00:08:44.214 "thin_provision": false, 00:08:44.214 "num_allocated_clusters": 38, 00:08:44.214 "snapshot": false, 00:08:44.214 "clone": false, 00:08:44.214 "esnap_clone": false 00:08:44.214 } 00:08:44.214 } 00:08:44.214 } 00:08:44.214 ] 00:08:44.214 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:44.214 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:44.215 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:44.473 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:44.474 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:44.474 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:44.474 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:44.474 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:44.734 [2024-11-27 05:30:32.621244] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:44.734 05:30:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:44.734 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:44.993 request: 00:08:44.993 { 00:08:44.993 "uuid": "9b900605-d374-4781-a794-e045a0a509d4", 00:08:44.993 "method": "bdev_lvol_get_lvstores", 00:08:44.993 "req_id": 1 00:08:44.993 } 00:08:44.993 Got JSON-RPC error response 00:08:44.993 response: 00:08:44.993 { 00:08:44.993 "code": -19, 00:08:44.993 "message": "No such device" 00:08:44.993 } 00:08:44.993 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:44.993 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:44.993 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:44.993 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:44.993 05:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.252 aio_bdev 00:08:45.252 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 35a9460c-14f5-495c-909c-76262c7645fd 00:08:45.252 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=35a9460c-14f5-495c-909c-76262c7645fd 00:08:45.252 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.252 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:45.252 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.252 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.252 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.252 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 35a9460c-14f5-495c-909c-76262c7645fd -t 2000 00:08:45.511 [ 00:08:45.511 { 00:08:45.511 "name": "35a9460c-14f5-495c-909c-76262c7645fd", 00:08:45.512 "aliases": [ 00:08:45.512 "lvs/lvol" 00:08:45.512 ], 00:08:45.512 "product_name": "Logical Volume", 00:08:45.512 "block_size": 4096, 00:08:45.512 "num_blocks": 38912, 00:08:45.512 "uuid": "35a9460c-14f5-495c-909c-76262c7645fd", 00:08:45.512 "assigned_rate_limits": { 00:08:45.512 "rw_ios_per_sec": 0, 00:08:45.512 "rw_mbytes_per_sec": 0, 00:08:45.512 "r_mbytes_per_sec": 0, 00:08:45.512 "w_mbytes_per_sec": 0 00:08:45.512 }, 00:08:45.512 "claimed": false, 00:08:45.512 "zoned": false, 00:08:45.512 "supported_io_types": { 00:08:45.512 "read": true, 00:08:45.512 "write": true, 00:08:45.512 "unmap": true, 00:08:45.512 "flush": false, 00:08:45.512 "reset": true, 00:08:45.512 "nvme_admin": false, 00:08:45.512 "nvme_io": false, 00:08:45.512 "nvme_io_md": false, 00:08:45.512 "write_zeroes": true, 00:08:45.512 "zcopy": false, 00:08:45.512 "get_zone_info": false, 00:08:45.512 "zone_management": false, 00:08:45.512 "zone_append": false, 00:08:45.512 "compare": false, 00:08:45.512 "compare_and_write": false, 
00:08:45.512 "abort": false, 00:08:45.512 "seek_hole": true, 00:08:45.512 "seek_data": true, 00:08:45.512 "copy": false, 00:08:45.512 "nvme_iov_md": false 00:08:45.512 }, 00:08:45.512 "driver_specific": { 00:08:45.512 "lvol": { 00:08:45.512 "lvol_store_uuid": "9b900605-d374-4781-a794-e045a0a509d4", 00:08:45.512 "base_bdev": "aio_bdev", 00:08:45.512 "thin_provision": false, 00:08:45.512 "num_allocated_clusters": 38, 00:08:45.512 "snapshot": false, 00:08:45.512 "clone": false, 00:08:45.512 "esnap_clone": false 00:08:45.512 } 00:08:45.512 } 00:08:45.512 } 00:08:45.512 ] 00:08:45.512 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:45.512 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:45.512 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:45.771 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:45.771 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:45.771 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:46.030 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:46.030 05:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 35a9460c-14f5-495c-909c-76262c7645fd 00:08:46.030 05:30:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b900605-d374-4781-a794-e045a0a509d4 00:08:46.289 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.548 00:08:46.548 real 0m16.838s 00:08:46.548 user 0m43.682s 00:08:46.548 sys 0m3.838s 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:46.548 ************************************ 00:08:46.548 END TEST lvs_grow_dirty 00:08:46.548 ************************************ 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:46.548 nvmf_trace.0 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.548 rmmod nvme_tcp 00:08:46.548 rmmod nvme_fabrics 00:08:46.548 rmmod nvme_keyring 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1620714 ']' 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1620714 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1620714 ']' 00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1620714 
00:08:46.548 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1620714 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1620714' 00:08:46.807 killing process with pid 1620714 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1620714 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1620714 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.807 05:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.346 05:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:49.346 00:08:49.346 real 0m41.869s 00:08:49.346 user 1m4.522s 00:08:49.346 sys 0m10.360s 00:08:49.346 05:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.346 05:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.346 ************************************ 00:08:49.346 END TEST nvmf_lvs_grow 00:08:49.346 ************************************ 00:08:49.346 05:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:49.346 05:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:49.346 05:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.346 05:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.346 ************************************ 00:08:49.346 START TEST nvmf_bdev_io_wait 00:08:49.346 ************************************ 00:08:49.346 05:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:49.346 * Looking for test storage... 
00:08:49.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.346 05:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:49.346 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:49.347 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.347 --rc genhtml_branch_coverage=1 00:08:49.347 --rc genhtml_function_coverage=1 00:08:49.347 --rc genhtml_legend=1 00:08:49.347 --rc geninfo_all_blocks=1 00:08:49.347 --rc geninfo_unexecuted_blocks=1 00:08:49.347 00:08:49.347 ' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:49.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.347 --rc genhtml_branch_coverage=1 00:08:49.347 --rc genhtml_function_coverage=1 00:08:49.347 --rc genhtml_legend=1 00:08:49.347 --rc geninfo_all_blocks=1 00:08:49.347 --rc geninfo_unexecuted_blocks=1 00:08:49.347 00:08:49.347 ' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:49.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.347 --rc genhtml_branch_coverage=1 00:08:49.347 --rc genhtml_function_coverage=1 00:08:49.347 --rc genhtml_legend=1 00:08:49.347 --rc geninfo_all_blocks=1 00:08:49.347 --rc geninfo_unexecuted_blocks=1 00:08:49.347 00:08:49.347 ' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:49.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.347 --rc genhtml_branch_coverage=1 00:08:49.347 --rc genhtml_function_coverage=1 00:08:49.347 --rc genhtml_legend=1 00:08:49.347 --rc geninfo_all_blocks=1 00:08:49.347 --rc geninfo_unexecuted_blocks=1 00:08:49.347 00:08:49.347 ' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.347 05:30:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
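The `NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)` trace above shows `build_nvmf_app_args` assembling the target's command line as a bash array. A minimal sketch of that idiom (the array and variable names mirror the trace; the `nvmf_tgt` base command and the value `0` are illustrative):

```shell
# Sketch of the argument-array pattern from build_nvmf_app_args.
# Appending with +=( ... ) keeps each flag a separate word, so later
# expansion as "${NVMF_APP[@]}" survives spaces and quoting.
NVMF_APP_SHM_ID=0                              # illustrative shm id
NVMF_APP=(nvmf_tgt)                            # base command
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shm id + trace-group mask
echo "${NVMF_APP[@]}"
```

Expanding the array unquoted element-by-element (`"${NVMF_APP[@]}"`) is what lets later wrappers such as `ip netns exec` be prepended without re-parsing the flags.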
00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.347 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
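The `[: : integer expression expected` error logged from `nvmf/common.sh: line 33` above comes from a numeric `test` (`'[' '' -eq 1 ']'`) receiving an empty string where an integer is required, because the flag variable being tested is unset. A small sketch of the failure mode and the usual defensive default (the variable name `flag` is illustrative, not the one in common.sh):

```shell
# Reproduce the failure mode: an empty operand makes the numeric test
# itself error out (exit status 2), not merely evaluate false.
flag=""
[ "$flag" -eq 1 ] 2>/dev/null && echo "set"   # errors; echo never runs

# Defaulting the expansion keeps the test well-formed: ":-0" substitutes
# 0 when the variable is unset or empty.
if [ "${flag:-0}" -eq 1 ]; then
    result=set
else
    result=unset
fi
echo "$result"
```

With the default in place the branch simply takes the false arm instead of emitting the diagnostic seen in the log.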
00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:49.348 05:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:55.922 05:30:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:55.922 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:55.922 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.922 05:30:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:55.922 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:55.923 Found net devices under 0000:86:00.0: cvl_0_0 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.923 
05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:55.923 Found net devices under 0000:86:00.1: cvl_0_1 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.923 05:30:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:55.923 05:30:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:55.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:08:55.923 00:08:55.923 --- 10.0.0.2 ping statistics --- 00:08:55.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.923 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:55.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:08:55.923 00:08:55.923 --- 10.0.0.1 ping statistics --- 00:08:55.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.923 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1624927 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1624927 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1624927 ']' 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.923 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.923 [2024-11-27 05:30:43.137826] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:08:55.923 [2024-11-27 05:30:43.137879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.923 [2024-11-27 05:30:43.215773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.923 [2024-11-27 05:30:43.259876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.924 [2024-11-27 05:30:43.259915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
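The `waitforlisten 1624927` call above, with its `max_retries=100` local and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message, is a polling loop on the RPC socket path. A hedged sketch of that pattern, polling for a plain file as a stand-in for the socket (function name and retry arithmetic here are illustrative, not the exact common.sh implementation):

```shell
# Poll until a path exists or the retry budget runs out, like waitforlisten.
waitforfile() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1   # give up: not listening
        sleep 0.01                                # brief back-off per retry
    done
    return 0
}

tmp=$(mktemp)          # already exists, so the wait succeeds immediately
waitforfile "$tmp" && echo ready
rm -f "$tmp"
```

The real helper additionally checks that the PID it was handed is still alive between retries, so a crashed target fails fast instead of burning the whole retry budget.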
00:08:55.924 [2024-11-27 05:30:43.259925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.924 [2024-11-27 05:30:43.259932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.924 [2024-11-27 05:30:43.259938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.924 [2024-11-27 05:30:43.261391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.924 [2024-11-27 05:30:43.261499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.924 [2024-11-27 05:30:43.261608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.924 [2024-11-27 05:30:43.261609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 05:30:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 [2024-11-27 05:30:43.397691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 Malloc0 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.924 
05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.924 [2024-11-27 05:30:43.444989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1624952 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1624954 
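The `WRITE_PID=1624952` / `READ_PID=1624954` assignments above record the PIDs of background `bdevperf` runs so the script can `wait` on all four workloads (write, read, flush, unmap) later. A minimal sketch of that launch-and-collect pattern, with `sleep` standing in for the bdevperf invocations:

```shell
# Launch four background jobs and capture each PID via $!,
# mirroring WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID in bdev_io_wait.sh.
sleep 0.1 & WRITE_PID=$!
sleep 0.1 & READ_PID=$!
sleep 0.1 & FLUSH_PID=$!
sleep 0.1 & UNMAP_PID=$!

# wait on the explicit PID list; status is nonzero if any job failed.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
status=$?
echo "all workloads done (status $status)"
```

Waiting on an explicit PID list (rather than a bare `wait`) is what lets the test attribute a failure to a specific workload.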
00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.924 { 00:08:55.924 "params": { 00:08:55.924 "name": "Nvme$subsystem", 00:08:55.924 "trtype": "$TEST_TRANSPORT", 00:08:55.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.924 "adrfam": "ipv4", 00:08:55.924 "trsvcid": "$NVMF_PORT", 00:08:55.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.924 "hdgst": ${hdgst:-false}, 00:08:55.924 "ddgst": ${ddgst:-false} 00:08:55.924 }, 00:08:55.924 "method": "bdev_nvme_attach_controller" 00:08:55.924 } 00:08:55.924 EOF 00:08:55.924 )") 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1624956 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.924 { 00:08:55.924 "params": { 00:08:55.924 
"name": "Nvme$subsystem", 00:08:55.924 "trtype": "$TEST_TRANSPORT", 00:08:55.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.924 "adrfam": "ipv4", 00:08:55.924 "trsvcid": "$NVMF_PORT", 00:08:55.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.924 "hdgst": ${hdgst:-false}, 00:08:55.924 "ddgst": ${ddgst:-false} 00:08:55.924 }, 00:08:55.924 "method": "bdev_nvme_attach_controller" 00:08:55.924 } 00:08:55.924 EOF 00:08:55.924 )") 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1624959 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:55.924 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:08:55.925 { 00:08:55.925 "params": { 00:08:55.925 "name": "Nvme$subsystem", 00:08:55.925 "trtype": "$TEST_TRANSPORT", 00:08:55.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.925 "adrfam": "ipv4", 00:08:55.925 "trsvcid": "$NVMF_PORT", 00:08:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.925 "hdgst": ${hdgst:-false}, 00:08:55.925 "ddgst": ${ddgst:-false} 00:08:55.925 }, 00:08:55.925 "method": "bdev_nvme_attach_controller" 00:08:55.925 } 00:08:55.925 EOF 00:08:55.925 )") 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:55.925 { 00:08:55.925 "params": { 00:08:55.925 "name": "Nvme$subsystem", 00:08:55.925 "trtype": "$TEST_TRANSPORT", 00:08:55.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.925 "adrfam": "ipv4", 00:08:55.925 "trsvcid": "$NVMF_PORT", 00:08:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.925 "hdgst": ${hdgst:-false}, 00:08:55.925 "ddgst": ${ddgst:-false} 00:08:55.925 }, 00:08:55.925 "method": "bdev_nvme_attach_controller" 00:08:55.925 } 00:08:55.925 EOF 00:08:55.925 )") 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1624952 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:55.925 
05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.925 "params": { 00:08:55.925 "name": "Nvme1", 00:08:55.925 "trtype": "tcp", 00:08:55.925 "traddr": "10.0.0.2", 00:08:55.925 "adrfam": "ipv4", 00:08:55.925 "trsvcid": "4420", 00:08:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.925 "hdgst": false, 00:08:55.925 "ddgst": false 00:08:55.925 }, 00:08:55.925 "method": "bdev_nvme_attach_controller" 00:08:55.925 }' 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.925 "params": { 00:08:55.925 "name": "Nvme1", 00:08:55.925 "trtype": "tcp", 00:08:55.925 "traddr": "10.0.0.2", 00:08:55.925 "adrfam": "ipv4", 00:08:55.925 "trsvcid": "4420", 00:08:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.925 "hdgst": false, 00:08:55.925 "ddgst": false 00:08:55.925 }, 00:08:55.925 "method": "bdev_nvme_attach_controller" 00:08:55.925 }' 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.925 "params": { 00:08:55.925 "name": "Nvme1", 00:08:55.925 "trtype": "tcp", 00:08:55.925 "traddr": "10.0.0.2", 00:08:55.925 "adrfam": "ipv4", 00:08:55.925 "trsvcid": "4420", 00:08:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.925 "hdgst": false, 00:08:55.925 "ddgst": false 00:08:55.925 }, 00:08:55.925 "method": "bdev_nvme_attach_controller" 00:08:55.925 }' 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:55.925 05:30:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:55.925 "params": { 00:08:55.925 "name": "Nvme1", 00:08:55.925 "trtype": "tcp", 00:08:55.925 "traddr": "10.0.0.2", 00:08:55.925 "adrfam": "ipv4", 00:08:55.925 "trsvcid": "4420", 00:08:55.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.925 "hdgst": false, 00:08:55.925 "ddgst": false 00:08:55.925 }, 00:08:55.925 "method": "bdev_nvme_attach_controller" 00:08:55.925 }' 00:08:55.925 [2024-11-27 05:30:43.496236] Starting SPDK v25.01-pre git sha1 
a640d9f98 / DPDK 24.03.0 initialization... 00:08:55.925 [2024-11-27 05:30:43.496237] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:08:55.925 [2024-11-27 05:30:43.496288] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:55.925 [2024-11-27 05:30:43.496288] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:55.925 [2024-11-27 05:30:43.496397] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:08:55.925 [2024-11-27 05:30:43.496432] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:55.925 [2024-11-27 05:30:43.500098] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:08:55.925 [2024-11-27 05:30:43.500142] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:55.925 [2024-11-27 05:30:43.682655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.925 [2024-11-27 05:30:43.725434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:55.925 [2024-11-27 05:30:43.780980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.925 [2024-11-27 05:30:43.823275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:55.925 [2024-11-27 05:30:43.873153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.925 [2024-11-27 05:30:43.917165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.925 [2024-11-27 05:30:43.919435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:56.185 [2024-11-27 05:30:43.959906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:56.185 Running I/O for 1 seconds... 00:08:56.185 Running I/O for 1 seconds... 00:08:56.185 Running I/O for 1 seconds... 00:08:56.444 Running I/O for 1 seconds... 
00:08:57.381 11988.00 IOPS, 46.83 MiB/s 00:08:57.381 Latency(us) 00:08:57.381 [2024-11-27T04:30:45.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.381 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:57.381 Nvme1n1 : 1.01 12031.24 47.00 0.00 0.00 10599.36 6241.52 16602.45 00:08:57.381 [2024-11-27T04:30:45.385Z] =================================================================================================================== 00:08:57.381 [2024-11-27T04:30:45.385Z] Total : 12031.24 47.00 0.00 0.00 10599.36 6241.52 16602.45 00:08:57.381 9965.00 IOPS, 38.93 MiB/s 00:08:57.381 Latency(us) 00:08:57.381 [2024-11-27T04:30:45.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.381 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:57.381 Nvme1n1 : 1.01 10036.19 39.20 0.00 0.00 12710.20 5086.84 22719.15 00:08:57.381 [2024-11-27T04:30:45.385Z] =================================================================================================================== 00:08:57.381 [2024-11-27T04:30:45.385Z] Total : 10036.19 39.20 0.00 0.00 12710.20 5086.84 22719.15 00:08:57.381 244160.00 IOPS, 953.75 MiB/s 00:08:57.381 Latency(us) 00:08:57.381 [2024-11-27T04:30:45.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.382 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:57.382 Nvme1n1 : 1.00 243784.77 952.28 0.00 0.00 522.80 234.06 1536.98 00:08:57.382 [2024-11-27T04:30:45.386Z] =================================================================================================================== 00:08:57.382 [2024-11-27T04:30:45.386Z] Total : 243784.77 952.28 0.00 0.00 522.80 234.06 1536.98 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1624954 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # 
wait 1624956 00:08:57.382 11773.00 IOPS, 45.99 MiB/s 00:08:57.382 Latency(us) 00:08:57.382 [2024-11-27T04:30:45.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.382 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:57.382 Nvme1n1 : 1.00 11854.73 46.31 0.00 0.00 10771.01 3167.57 22469.49 00:08:57.382 [2024-11-27T04:30:45.386Z] =================================================================================================================== 00:08:57.382 [2024-11-27T04:30:45.386Z] Total : 11854.73 46.31 0.00 0.00 10771.01 3167.57 22469.49 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1624959 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:08:57.382 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:57.382 rmmod nvme_tcp 00:08:57.382 rmmod nvme_fabrics 00:08:57.382 rmmod nvme_keyring 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1624927 ']' 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1624927 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1624927 ']' 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1624927 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1624927 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1624927' 00:08:57.641 killing process with pid 1624927 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1624927 00:08:57.641 05:30:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1624927 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.641 05:30:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:00.182 00:09:00.182 real 0m10.776s 00:09:00.182 user 0m16.125s 00:09:00.182 sys 0m6.161s 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.182 ************************************ 
00:09:00.182 END TEST nvmf_bdev_io_wait 00:09:00.182 ************************************ 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.182 ************************************ 00:09:00.182 START TEST nvmf_queue_depth 00:09:00.182 ************************************ 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:00.182 * Looking for test storage... 00:09:00.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.182 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:00.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.183 --rc genhtml_branch_coverage=1 00:09:00.183 --rc genhtml_function_coverage=1 00:09:00.183 --rc genhtml_legend=1 00:09:00.183 --rc geninfo_all_blocks=1 00:09:00.183 --rc 
geninfo_unexecuted_blocks=1 00:09:00.183 00:09:00.183 ' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:00.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.183 --rc genhtml_branch_coverage=1 00:09:00.183 --rc genhtml_function_coverage=1 00:09:00.183 --rc genhtml_legend=1 00:09:00.183 --rc geninfo_all_blocks=1 00:09:00.183 --rc geninfo_unexecuted_blocks=1 00:09:00.183 00:09:00.183 ' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:00.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.183 --rc genhtml_branch_coverage=1 00:09:00.183 --rc genhtml_function_coverage=1 00:09:00.183 --rc genhtml_legend=1 00:09:00.183 --rc geninfo_all_blocks=1 00:09:00.183 --rc geninfo_unexecuted_blocks=1 00:09:00.183 00:09:00.183 ' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:00.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.183 --rc genhtml_branch_coverage=1 00:09:00.183 --rc genhtml_function_coverage=1 00:09:00.183 --rc genhtml_legend=1 00:09:00.183 --rc geninfo_all_blocks=1 00:09:00.183 --rc geninfo_unexecuted_blocks=1 00:09:00.183 00:09:00.183 ' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.183 05:30:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.183 05:30:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.183 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.184 05:30:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.184 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:00.184 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:00.184 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:00.184 05:30:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:06.760 05:30:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:06.760 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:06.761 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:06.761 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:06.761 Found net devices under 0000:86:00.0: cvl_0_0 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:06.761 Found net devices under 0000:86:00.1: cvl_0_1 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.761 
05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:06.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:09:06.761 00:09:06.761 --- 10.0.0.2 ping statistics --- 00:09:06.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.761 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:06.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:09:06.761 00:09:06.761 --- 10.0.0.1 ping statistics --- 00:09:06.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.761 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1628960 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1628960 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1628960 ']' 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.761 05:30:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.761 [2024-11-27 05:30:54.046011] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:09:06.761 [2024-11-27 05:30:54.046064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.761 [2024-11-27 05:30:54.127849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.761 [2024-11-27 05:30:54.169294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.761 [2024-11-27 05:30:54.169327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:06.761 [2024-11-27 05:30:54.169334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.762 [2024-11-27 05:30:54.169340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.762 [2024-11-27 05:30:54.169345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.762 [2024-11-27 05:30:54.169882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.762 [2024-11-27 05:30:54.301512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.762 Malloc0 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.762 [2024-11-27 05:30:54.347512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.762 05:30:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1628992 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1628992 /var/tmp/bdevperf.sock 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1628992 ']' 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:06.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.762 [2024-11-27 05:30:54.396043] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:09:06.762 [2024-11-27 05:30:54.396083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1628992 ] 00:09:06.762 [2024-11-27 05:30:54.469146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.762 [2024-11-27 05:30:54.511024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:06.762 NVMe0n1 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.762 05:30:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:07.021 Running I/O for 10 seconds... 
00:09:08.897 12246.00 IOPS, 47.84 MiB/s [2024-11-27T04:30:57.837Z] 12277.00 IOPS, 47.96 MiB/s [2024-11-27T04:30:59.214Z] 12311.33 IOPS, 48.09 MiB/s [2024-11-27T04:31:00.151Z] 12410.25 IOPS, 48.48 MiB/s [2024-11-27T04:31:01.088Z] 12457.80 IOPS, 48.66 MiB/s [2024-11-27T04:31:02.112Z] 12436.83 IOPS, 48.58 MiB/s [2024-11-27T04:31:03.146Z] 12425.57 IOPS, 48.54 MiB/s [2024-11-27T04:31:04.085Z] 12486.50 IOPS, 48.78 MiB/s [2024-11-27T04:31:05.022Z] 12486.89 IOPS, 48.78 MiB/s [2024-11-27T04:31:05.022Z] 12477.20 IOPS, 48.74 MiB/s 00:09:17.018 Latency(us) 00:09:17.018 [2024-11-27T04:31:05.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.018 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:17.018 Verification LBA range: start 0x0 length 0x4000 00:09:17.018 NVMe0n1 : 10.05 12517.59 48.90 0.00 0.00 81539.03 9362.29 51180.50 00:09:17.018 [2024-11-27T04:31:05.022Z] =================================================================================================================== 00:09:17.018 [2024-11-27T04:31:05.022Z] Total : 12517.59 48.90 0.00 0.00 81539.03 9362.29 51180.50 00:09:17.018 { 00:09:17.018 "results": [ 00:09:17.018 { 00:09:17.018 "job": "NVMe0n1", 00:09:17.018 "core_mask": "0x1", 00:09:17.018 "workload": "verify", 00:09:17.018 "status": "finished", 00:09:17.018 "verify_range": { 00:09:17.018 "start": 0, 00:09:17.018 "length": 16384 00:09:17.018 }, 00:09:17.018 "queue_depth": 1024, 00:09:17.018 "io_size": 4096, 00:09:17.018 "runtime": 10.048976, 00:09:17.018 "iops": 12517.593832446211, 00:09:17.018 "mibps": 48.89685090799301, 00:09:17.018 "io_failed": 0, 00:09:17.018 "io_timeout": 0, 00:09:17.018 "avg_latency_us": 81539.02981920216, 00:09:17.018 "min_latency_us": 9362.285714285714, 00:09:17.018 "max_latency_us": 51180.49523809524 00:09:17.018 } 00:09:17.018 ], 00:09:17.018 "core_count": 1 00:09:17.018 } 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1628992 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1628992 ']' 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1628992 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1628992 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1628992' 00:09:17.018 killing process with pid 1628992 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1628992 00:09:17.018 Received shutdown signal, test time was about 10.000000 seconds 00:09:17.018 00:09:17.018 Latency(us) 00:09:17.018 [2024-11-27T04:31:05.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:17.018 [2024-11-27T04:31:05.022Z] =================================================================================================================== 00:09:17.018 [2024-11-27T04:31:05.022Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:17.018 05:31:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1628992 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.277 rmmod nvme_tcp 00:09:17.277 rmmod nvme_fabrics 00:09:17.277 rmmod nvme_keyring 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1628960 ']' 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1628960 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1628960 ']' 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1628960 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1628960 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1628960' 00:09:17.277 killing process with pid 1628960 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1628960 00:09:17.277 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1628960 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.536 05:31:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.439 05:31:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.699 00:09:19.699 real 0m19.690s 00:09:19.699 user 0m22.844s 00:09:19.699 sys 0m6.169s 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.699 ************************************ 00:09:19.699 END TEST nvmf_queue_depth 00:09:19.699 ************************************ 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.699 ************************************ 00:09:19.699 START TEST nvmf_target_multipath 00:09:19.699 ************************************ 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:19.699 * Looking for test storage... 
00:09:19.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:19.699 05:31:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.699 --rc genhtml_branch_coverage=1 00:09:19.699 --rc genhtml_function_coverage=1 00:09:19.699 --rc genhtml_legend=1 00:09:19.699 --rc geninfo_all_blocks=1 00:09:19.699 --rc geninfo_unexecuted_blocks=1 00:09:19.699 00:09:19.699 ' 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.699 --rc genhtml_branch_coverage=1 00:09:19.699 --rc genhtml_function_coverage=1 00:09:19.699 --rc genhtml_legend=1 00:09:19.699 --rc geninfo_all_blocks=1 00:09:19.699 --rc geninfo_unexecuted_blocks=1 00:09:19.699 00:09:19.699 ' 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.699 --rc genhtml_branch_coverage=1 00:09:19.699 --rc genhtml_function_coverage=1 00:09:19.699 --rc genhtml_legend=1 00:09:19.699 --rc geninfo_all_blocks=1 00:09:19.699 --rc geninfo_unexecuted_blocks=1 00:09:19.699 00:09:19.699 ' 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.699 --rc genhtml_branch_coverage=1 00:09:19.699 --rc genhtml_function_coverage=1 00:09:19.699 --rc genhtml_legend=1 00:09:19.699 --rc geninfo_all_blocks=1 00:09:19.699 --rc geninfo_unexecuted_blocks=1 00:09:19.699 00:09:19.699 ' 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.699 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.959 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:19.959 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:19.959 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.959 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.959 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.959 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.959 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.959 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.960 05:31:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.533 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:26.534 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:26.534 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:26.534 Found net devices under 0000:86:00.0: cvl_0_0 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.534 05:31:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:26.534 Found net devices under 0000:86:00.1: cvl_0_1 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:09:26.534 00:09:26.534 --- 10.0.0.2 ping statistics --- 00:09:26.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.534 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:09:26.534 00:09:26.534 --- 10.0.0.1 ping statistics --- 00:09:26.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.534 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:26.534 only one NIC for nvmf test 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:26.534 05:31:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.534 rmmod nvme_tcp 00:09:26.534 rmmod nvme_fabrics 00:09:26.534 rmmod nvme_keyring 00:09:26.534 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.535 05:31:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:27.915 00:09:27.915 real 0m8.356s 00:09:27.915 user 0m1.871s 00:09:27.915 sys 0m4.496s 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:27.915 ************************************ 00:09:27.915 END TEST nvmf_target_multipath 00:09:27.915 ************************************ 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.915 05:31:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.175 ************************************ 00:09:28.175 START TEST nvmf_zcopy 00:09:28.175 ************************************ 00:09:28.175 05:31:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:28.175 * Looking for test storage... 00:09:28.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.175 05:31:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.175 --rc genhtml_branch_coverage=1 00:09:28.175 --rc genhtml_function_coverage=1 00:09:28.175 --rc genhtml_legend=1 00:09:28.175 --rc geninfo_all_blocks=1 00:09:28.175 --rc geninfo_unexecuted_blocks=1 00:09:28.175 00:09:28.175 ' 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.175 --rc genhtml_branch_coverage=1 00:09:28.175 --rc genhtml_function_coverage=1 00:09:28.175 --rc genhtml_legend=1 00:09:28.175 --rc geninfo_all_blocks=1 00:09:28.175 --rc geninfo_unexecuted_blocks=1 00:09:28.175 00:09:28.175 ' 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:28.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.175 --rc genhtml_branch_coverage=1 00:09:28.175 --rc genhtml_function_coverage=1 00:09:28.175 --rc genhtml_legend=1 00:09:28.175 --rc geninfo_all_blocks=1 00:09:28.175 --rc geninfo_unexecuted_blocks=1 00:09:28.175 00:09:28.175 ' 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.175 --rc genhtml_branch_coverage=1 00:09:28.175 --rc 
genhtml_function_coverage=1 00:09:28.175 --rc genhtml_legend=1 00:09:28.175 --rc geninfo_all_blocks=1 00:09:28.175 --rc geninfo_unexecuted_blocks=1 00:09:28.175 00:09:28.175 ' 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.175 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.176 05:31:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:28.176 05:31:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:28.176 05:31:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:34.749 05:31:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:34.749 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:34.749 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:34.750 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:34.750 Found net devices under 0000:86:00.0: cvl_0_0 00:09:34.750 05:31:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:34.750 Found net devices under 0000:86:00.1: cvl_0_1 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.750 05:31:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:34.750 05:31:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:34.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:09:34.750 00:09:34.750 --- 10.0.0.2 ping statistics --- 00:09:34.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.750 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:34.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:09:34.750 00:09:34.750 --- 10.0.0.1 ping statistics --- 00:09:34.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.750 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1637901 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1637901 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1637901 ']' 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.750 05:31:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.750 [2024-11-27 05:31:22.181872] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:09:34.750 [2024-11-27 05:31:22.181918] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.750 [2024-11-27 05:31:22.259522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.750 [2024-11-27 05:31:22.301068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.750 [2024-11-27 05:31:22.301101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:34.750 [2024-11-27 05:31:22.301108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.750 [2024-11-27 05:31:22.301114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.750 [2024-11-27 05:31:22.301119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.750 [2024-11-27 05:31:22.301678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 [2024-11-27 05:31:23.060183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 [2024-11-27 05:31:23.080395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 malloc0 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:35.320 { 00:09:35.320 "params": { 00:09:35.320 "name": "Nvme$subsystem", 00:09:35.320 "trtype": "$TEST_TRANSPORT", 00:09:35.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.320 "adrfam": "ipv4", 00:09:35.320 "trsvcid": "$NVMF_PORT", 00:09:35.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.320 "hdgst": ${hdgst:-false}, 00:09:35.320 "ddgst": ${ddgst:-false} 00:09:35.320 }, 00:09:35.320 "method": "bdev_nvme_attach_controller" 00:09:35.320 } 00:09:35.320 EOF 00:09:35.320 )") 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:35.320 05:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:35.320 "params": { 00:09:35.320 "name": "Nvme1", 00:09:35.320 "trtype": "tcp", 00:09:35.320 "traddr": "10.0.0.2", 00:09:35.320 "adrfam": "ipv4", 00:09:35.320 "trsvcid": "4420", 00:09:35.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.320 "hdgst": false, 00:09:35.320 "ddgst": false 00:09:35.320 }, 00:09:35.320 "method": "bdev_nvme_attach_controller" 00:09:35.320 }' 00:09:35.320 [2024-11-27 05:31:23.164175] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:09:35.320 [2024-11-27 05:31:23.164216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638061 ] 00:09:35.320 [2024-11-27 05:31:23.239492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.321 [2024-11-27 05:31:23.280100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.888 Running I/O for 10 seconds... 
00:09:37.759 8681.00 IOPS, 67.82 MiB/s [2024-11-27T04:31:26.699Z] 8772.50 IOPS, 68.54 MiB/s [2024-11-27T04:31:28.076Z] 8783.00 IOPS, 68.62 MiB/s [2024-11-27T04:31:29.013Z] 8787.50 IOPS, 68.65 MiB/s [2024-11-27T04:31:29.948Z] 8802.20 IOPS, 68.77 MiB/s [2024-11-27T04:31:30.884Z] 8802.50 IOPS, 68.77 MiB/s [2024-11-27T04:31:31.820Z] 8784.71 IOPS, 68.63 MiB/s [2024-11-27T04:31:32.758Z] 8787.25 IOPS, 68.65 MiB/s [2024-11-27T04:31:33.696Z] 8790.22 IOPS, 68.67 MiB/s [2024-11-27T04:31:33.955Z] 8795.10 IOPS, 68.71 MiB/s 00:09:45.951 Latency(us) 00:09:45.951 [2024-11-27T04:31:33.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.951 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:45.951 Verification LBA range: start 0x0 length 0x1000 00:09:45.951 Nvme1n1 : 10.05 8763.19 68.46 0.00 0.00 14505.57 2356.18 41443.72 00:09:45.951 [2024-11-27T04:31:33.955Z] =================================================================================================================== 00:09:45.951 [2024-11-27T04:31:33.955Z] Total : 8763.19 68.46 0.00 0.00 14505.57 2356.18 41443.72 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1639766 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:45.951 05:31:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:45.951 { 00:09:45.951 "params": { 00:09:45.951 "name": "Nvme$subsystem", 00:09:45.951 "trtype": "$TEST_TRANSPORT", 00:09:45.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.951 "adrfam": "ipv4", 00:09:45.951 "trsvcid": "$NVMF_PORT", 00:09:45.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.951 "hdgst": ${hdgst:-false}, 00:09:45.951 "ddgst": ${ddgst:-false} 00:09:45.951 }, 00:09:45.951 "method": "bdev_nvme_attach_controller" 00:09:45.951 } 00:09:45.951 EOF 00:09:45.951 )") 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:45.951 [2024-11-27 05:31:33.882585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.951 [2024-11-27 05:31:33.882621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:45.951 05:31:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:45.951 "params": { 00:09:45.951 "name": "Nvme1", 00:09:45.951 "trtype": "tcp", 00:09:45.951 "traddr": "10.0.0.2", 00:09:45.951 "adrfam": "ipv4", 00:09:45.951 "trsvcid": "4420", 00:09:45.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.951 "hdgst": false, 00:09:45.951 "ddgst": false 00:09:45.951 }, 00:09:45.951 "method": "bdev_nvme_attach_controller" 00:09:45.951 }' 00:09:45.951 [2024-11-27 05:31:33.894580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.951 [2024-11-27 05:31:33.894593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.951 [2024-11-27 05:31:33.906606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.951 [2024-11-27 05:31:33.906616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.951 [2024-11-27 05:31:33.918638] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.951 [2024-11-27 05:31:33.918648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.951 [2024-11-27 05:31:33.922381] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:09:45.951 [2024-11-27 05:31:33.922425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639766 ] 00:09:45.951 [2024-11-27 05:31:33.930674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.951 [2024-11-27 05:31:33.930685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.951 [2024-11-27 05:31:33.942708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.951 [2024-11-27 05:31:33.942720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:33.954732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:33.954742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:33.966764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:33.966777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:33.978797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:33.978806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:33.990829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:33.990840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:33.996364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.211 [2024-11-27 05:31:34.002862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:46.211 [2024-11-27 05:31:34.002873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.014893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.014907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.026924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.026935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.038057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.211 [2024-11-27 05:31:34.038957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.038969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.051000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.051016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.063030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.063049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.075057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.075070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.087085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.087098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.099122] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.099135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.111150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.111163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.123181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.123193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.135240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.135262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.147254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.147270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.159287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.159303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.171319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.171333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.183349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.183364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.195381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.195393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.211 [2024-11-27 05:31:34.207415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.211 [2024-11-27 05:31:34.207425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.219454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.219472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.231483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.231494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.243515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.243526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.255553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.255567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.267580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.267591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.279614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.279624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.291646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 
[2024-11-27 05:31:34.291656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.303686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.303697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.315741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.315760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 Running I/O for 5 seconds... 00:09:46.471 [2024-11-27 05:31:34.327748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.327758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.343313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.343333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.357550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.357569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.371114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.471 [2024-11-27 05:31:34.371132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.471 [2024-11-27 05:31:34.385416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.472 [2024-11-27 05:31:34.385434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.472 [2024-11-27 05:31:34.399479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.472 [2024-11-27 
05:31:34.399498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.472 [2024-11-27 05:31:34.413501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.472 [2024-11-27 05:31:34.413519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.472 [2024-11-27 05:31:34.427353] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.472 [2024-11-27 05:31:34.427375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.472 [2024-11-27 05:31:34.441410] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.472 [2024-11-27 05:31:34.441429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.472 [2024-11-27 05:31:34.455028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.472 [2024-11-27 05:31:34.455046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.472 [2024-11-27 05:31:34.468893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.472 [2024-11-27 05:31:34.468912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.482681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.482700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.496521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.496540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.509871] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.509889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.523810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.523830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.537595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.537613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.551210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.551228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.564652] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.564683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.578272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.578290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.591911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.591929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.606040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.606059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.619778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.619796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 
[2024-11-27 05:31:34.633647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.633667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.648020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.648038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.661730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.661748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.675629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.675647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.689346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.689368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.703166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.703184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.716884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.716902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.732 [2024-11-27 05:31:34.730874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.732 [2024-11-27 05:31:34.730892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.991 [2024-11-27 05:31:34.745058] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.991 [2024-11-27 05:31:34.745076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pair repeated with advancing timestamps; interleaved throughput samples retained ...]
16779.00 IOPS, 131.09 MiB/s [2024-11-27T04:31:35.516Z]
16889.50 IOPS, 131.95 MiB/s [2024-11-27T04:31:36.555Z]
[2024-11-27 05:31:36.990126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.070 [2024-11-27 05:31:36.990145] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.070 [2024-11-27 05:31:37.004358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.070 [2024-11-27 05:31:37.004376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.070 [2024-11-27 05:31:37.014827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.070 [2024-11-27 05:31:37.014845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.070 [2024-11-27 05:31:37.028970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.070 [2024-11-27 05:31:37.028989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.070 [2024-11-27 05:31:37.042887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.070 [2024-11-27 05:31:37.042908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.070 [2024-11-27 05:31:37.056686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.070 [2024-11-27 05:31:37.056705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.070 [2024-11-27 05:31:37.070984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.070 [2024-11-27 05:31:37.071002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.329 [2024-11-27 05:31:37.086852] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.086871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.100932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.100951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:49.330 [2024-11-27 05:31:37.114648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.114667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.128310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.128329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.142536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.142555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.156211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.156229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.170275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.170293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.183848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.183867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.198010] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.198029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.211974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.211993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.225579] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.225597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.239402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.239423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.253356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.253375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.267479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.267497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.278160] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.278178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.292256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.292274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.305935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.305953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.330 [2024-11-27 05:31:37.319562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.330 [2024-11-27 05:31:37.319580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 16877.00 IOPS, 131.85 MiB/s [2024-11-27T04:31:37.594Z] [2024-11-27 05:31:37.333015] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.333035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.347232] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.347250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.360743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.360761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.374553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.374570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.388348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.388367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.402811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.402829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.416719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.416737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.430293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.430311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.444019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.444037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.457722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.457739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.471406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.471424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.485330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.485350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.499524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.499543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.513091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.513109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.527022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.527040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.541139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.541161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.554904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 
[2024-11-27 05:31:37.554922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.568942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.568960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.590 [2024-11-27 05:31:37.582493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.590 [2024-11-27 05:31:37.582511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.596153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.596172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.610170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.610188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.624007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.624026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.637474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.637491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.651521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.651539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.664874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.664891] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.678598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.678615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.692584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.692602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.706444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.706462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.720434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.720452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.734233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.734250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.748522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.748540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.762126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.762144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.776132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.776150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:49.849 [2024-11-27 05:31:37.789502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.789519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.803539] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.803561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.817196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.817215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.830897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.830925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.849 [2024-11-27 05:31:37.844850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.849 [2024-11-27 05:31:37.844868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.859002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.859020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.872633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.872651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.886455] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.886473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.900137] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.900155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.914147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.914182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.928087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.928105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.942409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.942427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.956130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.956148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.970187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.970204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.983882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.983899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:37.997657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:37.997681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:38.011137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:38.011156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:38.025079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:38.025097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:38.039073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:38.039090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:38.053240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:38.053258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:38.067254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:38.067275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:38.081377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:38.081395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:38.095243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:38.095261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.108 [2024-11-27 05:31:38.108906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.108 [2024-11-27 05:31:38.108924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.123573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 
[2024-11-27 05:31:38.123591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.134044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.134062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.148179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.148198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.161725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.161746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.175790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.175808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.189885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.189903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.203204] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.203223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.217412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.217431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.231161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.231180] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.244524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.244544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.258460] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.258481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.272289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.272307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.286149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.286168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.299505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.299524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.313861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.313879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.327941] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.327959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 16889.25 IOPS, 131.95 MiB/s [2024-11-27T04:31:38.373Z] [2024-11-27 05:31:38.341686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.341704] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.356123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.356142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.369 [2024-11-27 05:31:38.367064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.369 [2024-11-27 05:31:38.367083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.381200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.381219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.394847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.394866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.408943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.408961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.419799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.419818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.433869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.433887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.447775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.447793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:50.629 [2024-11-27 05:31:38.461389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.461407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.475470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.475489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.489692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.489726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.504973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.504991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.518629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.518649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.532549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.532567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.546097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.546115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.560208] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.560226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.629 [2024-11-27 05:31:38.574091] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.629 [2024-11-27 05:31:38.574109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two log lines above repeat with advancing timestamps (2024-11-27 05:31:38.587985 through 05:31:39.321142) while the abort workload runs; each retried nvmf_subsystem_add_ns RPC fails because NSID 1 is still in use ...]
00:09:51.409 [2024-11-27 05:31:39.334902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.409 [2024-11-27 05:31:39.334931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.409 16883.20 IOPS, 131.90 MiB/s 00:09:51.409 Latency(us) 00:09:51.409 [2024-11-27T04:31:39.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.409 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:51.409 Nvme1n1 : 5.01 16890.18 131.95 0.00 0.00 7572.29 3120.76 17101.78 00:09:51.409 
[2024-11-27T04:31:39.413Z] =================================================================================================================== 00:09:51.409 [2024-11-27T04:31:39.413Z] Total : 16890.18 131.95 0.00 0.00 7572.29 3120.76 17101.78 00:09:51.409 [2024-11-27 05:31:39.344966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.409 [2024-11-27 05:31:39.344983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two log lines above repeat with advancing timestamps (2024-11-27 05:31:39.357008 through 05:31:39.489350) ...]
00:09:51.670 [2024-11-27 05:31:39.501371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.670 [2024-11-27 05:31:39.501380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1639766) - No such process 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1639766 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.670 delay0 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.670 05:31:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:51.670 [2024-11-27 05:31:39.645870] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:58.239 [2024-11-27 05:31:45.703614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5d70 is same with the state(6) to be set 00:09:58.239 
Initializing NVMe Controllers 00:09:58.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:58.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:58.239 Initialization complete. Launching workers. 00:09:58.239 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 107 00:09:58.239 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 389, failed to submit 38 00:09:58.239 success 193, unsuccessful 196, failed 0 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.239 rmmod nvme_tcp 00:09:58.239 rmmod nvme_fabrics 00:09:58.239 rmmod nvme_keyring 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1637901 ']' 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1637901 00:09:58.239 
05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1637901 ']' 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1637901 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1637901 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1637901' 00:09:58.239 killing process with pid 1637901 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1637901 00:09:58.239 05:31:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1637901 00:09:58.239 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.239 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.239 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.239 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:58.239 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:58.239 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.239 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.240 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.240 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:58.240 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.240 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.240 05:31:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.147 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:00.147 00:10:00.147 real 0m32.123s 00:10:00.147 user 0m43.065s 00:10:00.147 sys 0m11.015s 00:10:00.147 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.147 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.147 ************************************ 00:10:00.147 END TEST nvmf_zcopy 00:10:00.147 ************************************ 00:10:00.147 05:31:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.147 05:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.148 05:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.148 05:31:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.148 ************************************ 00:10:00.148 START TEST nvmf_nmic 00:10:00.148 ************************************ 00:10:00.148 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:00.407 * Looking for test storage... 
00:10:00.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.407 05:31:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.407 --rc genhtml_branch_coverage=1 00:10:00.407 --rc genhtml_function_coverage=1 00:10:00.407 --rc genhtml_legend=1 00:10:00.407 --rc geninfo_all_blocks=1 00:10:00.407 --rc geninfo_unexecuted_blocks=1 
00:10:00.407 00:10:00.407 ' 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.407 --rc genhtml_branch_coverage=1 00:10:00.407 --rc genhtml_function_coverage=1 00:10:00.407 --rc genhtml_legend=1 00:10:00.407 --rc geninfo_all_blocks=1 00:10:00.407 --rc geninfo_unexecuted_blocks=1 00:10:00.407 00:10:00.407 ' 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.407 --rc genhtml_branch_coverage=1 00:10:00.407 --rc genhtml_function_coverage=1 00:10:00.407 --rc genhtml_legend=1 00:10:00.407 --rc geninfo_all_blocks=1 00:10:00.407 --rc geninfo_unexecuted_blocks=1 00:10:00.407 00:10:00.407 ' 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.407 --rc genhtml_branch_coverage=1 00:10:00.407 --rc genhtml_function_coverage=1 00:10:00.407 --rc genhtml_legend=1 00:10:00.407 --rc geninfo_all_blocks=1 00:10:00.407 --rc geninfo_unexecuted_blocks=1 00:10:00.407 00:10:00.407 ' 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.407 05:31:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:00.407 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:00.408 
05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:00.408 05:31:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.978 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.978 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.979 05:31:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:06.979 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:06.979 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:06.979 Found net devices under 0000:86:00.0: cvl_0_0 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:06.979 Found net devices under 0000:86:00.1: cvl_0_1 00:10:06.979 
05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:06.979 05:31:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:10:06.979 00:10:06.979 --- 10.0.0.2 ping statistics --- 00:10:06.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.979 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:10:06.979 00:10:06.979 --- 10.0.0.1 ping statistics --- 00:10:06.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.979 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.979 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1645356 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1645356 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1645356 ']' 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.980 05:31:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.980 [2024-11-27 05:31:54.343520] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:10:06.980 [2024-11-27 05:31:54.343572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.980 [2024-11-27 05:31:54.424968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.980 [2024-11-27 05:31:54.469483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.980 [2024-11-27 05:31:54.469521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:06.980 [2024-11-27 05:31:54.469530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.980 [2024-11-27 05:31:54.469540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.980 [2024-11-27 05:31:54.469545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.980 [2024-11-27 05:31:54.471064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.980 [2024-11-27 05:31:54.471176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.980 [2024-11-27 05:31:54.471282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.980 [2024-11-27 05:31:54.471283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.239 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.239 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:07.239 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.239 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.239 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.239 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.239 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.239 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.239 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.239 [2024-11-27 05:31:55.235718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.498 
05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.498 Malloc0 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.498 [2024-11-27 05:31:55.305814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:07.498 test case1: single bdev can't be used in multiple subsystems 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.498 [2024-11-27 05:31:55.333724] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:07.498 [2024-11-27 
05:31:55.333746] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:07.498 [2024-11-27 05:31:55.333755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:07.498 request: 00:10:07.498 { 00:10:07.498 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:07.498 "namespace": { 00:10:07.498 "bdev_name": "Malloc0", 00:10:07.498 "no_auto_visible": false, 00:10:07.498 "hide_metadata": false 00:10:07.498 }, 00:10:07.498 "method": "nvmf_subsystem_add_ns", 00:10:07.498 "req_id": 1 00:10:07.498 } 00:10:07.498 Got JSON-RPC error response 00:10:07.498 response: 00:10:07.498 { 00:10:07.498 "code": -32602, 00:10:07.498 "message": "Invalid parameters" 00:10:07.498 } 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:07.498 Adding namespace failed - expected result. 
00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:07.498 test case2: host connect to nvmf target in multiple paths 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:07.498 [2024-11-27 05:31:55.345874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.498 05:31:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.876 05:31:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:09.813 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:09.813 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:09.813 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:09.813 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:09.813 05:31:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:12.352 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:12.352 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:12.352 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:12.352 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:12.352 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:12.352 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:12.352 05:31:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:12.352 [global] 00:10:12.352 thread=1 00:10:12.352 invalidate=1 00:10:12.352 rw=write 00:10:12.352 time_based=1 00:10:12.352 runtime=1 00:10:12.352 ioengine=libaio 00:10:12.352 direct=1 00:10:12.352 bs=4096 00:10:12.352 iodepth=1 00:10:12.352 norandommap=0 00:10:12.352 numjobs=1 00:10:12.352 00:10:12.352 verify_dump=1 00:10:12.352 verify_backlog=512 00:10:12.352 verify_state_save=0 00:10:12.352 do_verify=1 00:10:12.352 verify=crc32c-intel 00:10:12.352 [job0] 00:10:12.352 filename=/dev/nvme0n1 00:10:12.352 Could not set queue depth (nvme0n1) 00:10:12.352 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.352 fio-3.35 00:10:12.352 Starting 1 thread 00:10:13.289 00:10:13.289 job0: (groupid=0, jobs=1): err= 0: pid=1646439: Wed Nov 27 05:32:01 2024 00:10:13.289 read: IOPS=22, BW=89.6KiB/s (91.7kB/s)(92.0KiB/1027msec) 00:10:13.289 slat (nsec): min=9893, max=24323, avg=21401.00, stdev=2665.09 00:10:13.289 clat (usec): min=40612, max=41865, avg=40995.19, stdev=214.20 00:10:13.289 lat (usec): min=40622, max=41888, 
avg=41016.59, stdev=215.26 00:10:13.289 clat percentiles (usec): 00:10:13.289 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:13.289 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:13.289 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:13.289 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:13.289 | 99.99th=[41681] 00:10:13.289 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:13.289 slat (nsec): min=9720, max=50701, avg=11002.81, stdev=2328.04 00:10:13.289 clat (usec): min=118, max=376, avg=148.44, stdev=21.82 00:10:13.289 lat (usec): min=128, max=427, avg=159.44, stdev=22.85 00:10:13.289 clat percentiles (usec): 00:10:13.289 | 1.00th=[ 121], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 126], 00:10:13.289 | 30.00th=[ 131], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 157], 00:10:13.289 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 174], 00:10:13.289 | 99.00th=[ 186], 99.50th=[ 251], 99.90th=[ 375], 99.95th=[ 375], 00:10:13.289 | 99.99th=[ 375] 00:10:13.289 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:13.289 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:13.289 lat (usec) : 250=95.14%, 500=0.56% 00:10:13.289 lat (msec) : 50=4.30% 00:10:13.289 cpu : usr=0.39%, sys=0.88%, ctx=535, majf=0, minf=1 00:10:13.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.289 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.289 00:10:13.289 Run status group 0 (all jobs): 00:10:13.289 READ: bw=89.6KiB/s (91.7kB/s), 89.6KiB/s-89.6KiB/s (91.7kB/s-91.7kB/s), io=92.0KiB (94.2kB), 
run=1027-1027msec 00:10:13.289 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), run=1027-1027msec 00:10:13.289 00:10:13.289 Disk stats (read/write): 00:10:13.289 nvme0n1: ios=69/512, merge=0/0, ticks=1004/72, in_queue=1076, util=95.29% 00:10:13.289 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.549 rmmod nvme_tcp 00:10:13.549 rmmod nvme_fabrics 00:10:13.549 rmmod nvme_keyring 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1645356 ']' 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1645356 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1645356 ']' 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1645356 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.549 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1645356 00:10:13.808 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1645356' 00:10:13.809 killing process with pid 1645356 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1645356 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1645356 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.809 05:32:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.346 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:16.346 00:10:16.346 real 0m15.702s 00:10:16.346 user 0m36.358s 00:10:16.346 sys 0m5.237s 00:10:16.346 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.346 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.346 ************************************ 00:10:16.346 END TEST nvmf_nmic 00:10:16.346 ************************************ 00:10:16.346 05:32:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:10:16.346 05:32:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.346 05:32:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.346 05:32:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.346 ************************************ 00:10:16.346 START TEST nvmf_fio_target 00:10:16.346 ************************************ 00:10:16.346 05:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:16.346 * Looking for test storage... 00:10:16.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.346 05:32:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:16.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.346 --rc genhtml_branch_coverage=1 00:10:16.346 --rc genhtml_function_coverage=1 00:10:16.346 --rc genhtml_legend=1 00:10:16.346 --rc geninfo_all_blocks=1 00:10:16.346 --rc geninfo_unexecuted_blocks=1 00:10:16.346 00:10:16.346 ' 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:16.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.346 --rc genhtml_branch_coverage=1 00:10:16.346 --rc genhtml_function_coverage=1 00:10:16.346 --rc genhtml_legend=1 00:10:16.346 --rc geninfo_all_blocks=1 00:10:16.346 --rc geninfo_unexecuted_blocks=1 00:10:16.346 00:10:16.346 ' 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:16.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.346 --rc genhtml_branch_coverage=1 00:10:16.346 --rc genhtml_function_coverage=1 00:10:16.346 --rc genhtml_legend=1 00:10:16.346 --rc geninfo_all_blocks=1 00:10:16.346 --rc geninfo_unexecuted_blocks=1 00:10:16.346 00:10:16.346 ' 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:16.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.346 --rc 
genhtml_branch_coverage=1 00:10:16.346 --rc genhtml_function_coverage=1 00:10:16.346 --rc genhtml_legend=1 00:10:16.346 --rc geninfo_all_blocks=1 00:10:16.346 --rc geninfo_unexecuted_blocks=1 00:10:16.346 00:10:16.346 ' 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.346 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:16.347 05:32:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.924 05:32:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.924 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:22.925 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:22.925 05:32:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:22.925 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:22.925 Found net devices under 0000:86:00.0: cvl_0_0 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:22.925 Found net devices under 0000:86:00.1: cvl_0_1 
00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.925 05:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:22.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:22.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:10:22.925 00:10:22.925 --- 10.0.0.2 ping statistics --- 00:10:22.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.925 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:10:22.925 00:10:22.925 --- 10.0.0.1 ping statistics --- 00:10:22.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.925 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1650210 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1650210 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1650210 ']' 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.925 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.925 [2024-11-27 05:32:10.116269] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:10:22.925 [2024-11-27 05:32:10.116318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.925 [2024-11-27 05:32:10.197794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.926 [2024-11-27 05:32:10.240623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.926 [2024-11-27 05:32:10.240663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.926 [2024-11-27 05:32:10.240677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.926 [2024-11-27 05:32:10.240685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.926 [2024-11-27 05:32:10.240692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:22.926 [2024-11-27 05:32:10.242208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.926 [2024-11-27 05:32:10.242314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.926 [2024-11-27 05:32:10.242421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.926 [2024-11-27 05:32:10.242423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.184 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.184 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:23.184 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:23.184 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:23.184 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.184 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.184 05:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:23.184 [2024-11-27 05:32:11.158736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.443 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.443 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:23.443 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.701 05:32:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:23.702 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.961 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:23.961 05:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.220 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:24.220 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:24.479 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.479 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:24.479 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.738 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:24.738 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.997 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:24.997 05:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:25.256 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:25.515 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:25.515 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.515 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:25.515 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.774 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.033 [2024-11-27 05:32:13.834444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.033 05:32:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:26.293 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:26.293 05:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:27.672 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:27.672 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:27.672 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:27.672 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:27.672 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:27.672 05:32:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:29.578 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:29.578 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:29.578 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:29.578 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:29.578 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:29.578 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:29.578 05:32:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:29.578 [global] 00:10:29.578 thread=1 00:10:29.578 invalidate=1 00:10:29.578 rw=write 00:10:29.578 time_based=1 00:10:29.578 runtime=1 00:10:29.578 ioengine=libaio 00:10:29.578 direct=1 00:10:29.578 bs=4096 00:10:29.578 iodepth=1 00:10:29.579 norandommap=0 00:10:29.579 numjobs=1 00:10:29.579 00:10:29.579 
verify_dump=1 00:10:29.579 verify_backlog=512 00:10:29.579 verify_state_save=0 00:10:29.579 do_verify=1 00:10:29.579 verify=crc32c-intel 00:10:29.579 [job0] 00:10:29.579 filename=/dev/nvme0n1 00:10:29.579 [job1] 00:10:29.579 filename=/dev/nvme0n2 00:10:29.579 [job2] 00:10:29.579 filename=/dev/nvme0n3 00:10:29.579 [job3] 00:10:29.579 filename=/dev/nvme0n4 00:10:29.579 Could not set queue depth (nvme0n1) 00:10:29.579 Could not set queue depth (nvme0n2) 00:10:29.579 Could not set queue depth (nvme0n3) 00:10:29.579 Could not set queue depth (nvme0n4) 00:10:29.837 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.837 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.837 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.837 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:29.837 fio-3.35 00:10:29.837 Starting 4 threads 00:10:31.242 00:10:31.242 job0: (groupid=0, jobs=1): err= 0: pid=1651781: Wed Nov 27 05:32:18 2024 00:10:31.242 read: IOPS=2331, BW=9327KiB/s (9551kB/s)(9336KiB/1001msec) 00:10:31.242 slat (nsec): min=6428, max=27324, avg=7232.93, stdev=1061.21 00:10:31.242 clat (usec): min=187, max=1109, avg=237.70, stdev=31.36 00:10:31.242 lat (usec): min=194, max=1118, avg=244.93, stdev=31.44 00:10:31.242 clat percentiles (usec): 00:10:31.242 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:10:31.242 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:10:31.242 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:10:31.242 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 766], 99.95th=[ 807], 00:10:31.242 | 99.99th=[ 1106] 00:10:31.242 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:31.242 slat (nsec): min=9461, max=36760, avg=10415.74, 
stdev=987.07 00:10:31.242 clat (usec): min=112, max=319, avg=152.76, stdev=17.13 00:10:31.242 lat (usec): min=122, max=330, avg=163.18, stdev=17.24 00:10:31.242 clat percentiles (usec): 00:10:31.242 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:10:31.242 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:10:31.242 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 178], 00:10:31.242 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 269], 99.95th=[ 273], 00:10:31.242 | 99.99th=[ 318] 00:10:31.242 bw ( KiB/s): min=11832, max=11832, per=38.86%, avg=11832.00, stdev= 0.00, samples=1 00:10:31.242 iops : min= 2958, max= 2958, avg=2958.00, stdev= 0.00, samples=1 00:10:31.242 lat (usec) : 250=89.80%, 500=10.11%, 750=0.02%, 1000=0.04% 00:10:31.242 lat (msec) : 2=0.02% 00:10:31.242 cpu : usr=3.20%, sys=3.70%, ctx=4895, majf=0, minf=1 00:10:31.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.242 issued rwts: total=2334,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.242 job1: (groupid=0, jobs=1): err= 0: pid=1651782: Wed Nov 27 05:32:18 2024 00:10:31.242 read: IOPS=1709, BW=6838KiB/s (7003kB/s)(6900KiB/1009msec) 00:10:31.242 slat (nsec): min=2286, max=26375, avg=7720.86, stdev=2715.82 00:10:31.242 clat (usec): min=185, max=41885, avg=355.85, stdev=1973.38 00:10:31.242 lat (usec): min=194, max=41910, avg=363.57, stdev=1973.80 00:10:31.242 clat percentiles (usec): 00:10:31.242 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 235], 00:10:31.242 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:10:31.242 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 334], 00:10:31.242 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[41157], 
99.95th=[41681], 00:10:31.242 | 99.99th=[41681] 00:10:31.242 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:10:31.242 slat (usec): min=3, max=769, avg=11.32, stdev=20.37 00:10:31.242 clat (usec): min=118, max=298, avg=169.68, stdev=20.44 00:10:31.242 lat (usec): min=125, max=949, avg=181.00, stdev=28.30 00:10:31.242 clat percentiles (usec): 00:10:31.242 | 1.00th=[ 133], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 155], 00:10:31.242 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:10:31.242 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 206], 00:10:31.242 | 99.00th=[ 231], 99.50th=[ 247], 99.90th=[ 289], 99.95th=[ 293], 00:10:31.242 | 99.99th=[ 297] 00:10:31.242 bw ( KiB/s): min= 7176, max= 9208, per=26.91%, avg=8192.00, stdev=1436.84, samples=2 00:10:31.242 iops : min= 1794, max= 2302, avg=2048.00, stdev=359.21, samples=2 00:10:31.242 lat (usec) : 250=74.69%, 500=24.97%, 750=0.24% 00:10:31.242 lat (msec) : 50=0.11% 00:10:31.242 cpu : usr=2.58%, sys=5.65%, ctx=3780, majf=0, minf=1 00:10:31.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.242 issued rwts: total=1725,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.242 job2: (groupid=0, jobs=1): err= 0: pid=1651784: Wed Nov 27 05:32:18 2024 00:10:31.242 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:10:31.242 slat (nsec): min=12110, max=24677, avg=16603.64, stdev=3852.44 00:10:31.242 clat (usec): min=40701, max=41041, avg=40955.97, stdev=71.10 00:10:31.242 lat (usec): min=40713, max=41062, avg=40972.57, stdev=71.86 00:10:31.242 clat percentiles (usec): 00:10:31.242 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:31.242 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:31.242 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:31.242 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:31.242 | 99.99th=[41157] 00:10:31.242 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:31.242 slat (nsec): min=12992, max=39266, avg=15187.12, stdev=2878.54 00:10:31.242 clat (usec): min=148, max=304, avg=177.87, stdev=16.37 00:10:31.242 lat (usec): min=162, max=317, avg=193.06, stdev=17.12 00:10:31.242 clat percentiles (usec): 00:10:31.242 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:10:31.242 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:10:31.242 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 204], 00:10:31.242 | 99.00th=[ 221], 99.50th=[ 262], 99.90th=[ 306], 99.95th=[ 306], 00:10:31.242 | 99.99th=[ 306] 00:10:31.242 bw ( KiB/s): min= 4096, max= 4096, per=13.45%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.242 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.242 lat (usec) : 250=95.13%, 500=0.75% 00:10:31.242 lat (msec) : 50=4.12% 00:10:31.242 cpu : usr=0.70%, sys=0.90%, ctx=535, majf=0, minf=1 00:10:31.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.242 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.242 job3: (groupid=0, jobs=1): err= 0: pid=1651785: Wed Nov 27 05:32:18 2024 00:10:31.242 read: IOPS=2049, BW=8200KiB/s (8397kB/s)(8208KiB/1001msec) 00:10:31.242 slat (nsec): min=7340, max=27297, avg=8520.74, stdev=1347.22 00:10:31.242 clat (usec): min=178, max=1279, avg=247.08, stdev=44.56 00:10:31.242 lat (usec): min=186, max=1288, 
avg=255.60, stdev=44.65 00:10:31.243 clat percentiles (usec): 00:10:31.243 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 217], 00:10:31.243 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 245], 60.00th=[ 253], 00:10:31.243 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 302], 00:10:31.243 | 99.00th=[ 420], 99.50th=[ 469], 99.90th=[ 498], 99.95th=[ 515], 00:10:31.243 | 99.99th=[ 1287] 00:10:31.243 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:31.243 slat (nsec): min=10462, max=56478, avg=11867.42, stdev=1977.36 00:10:31.243 clat (usec): min=125, max=1282, avg=168.49, stdev=36.40 00:10:31.243 lat (usec): min=135, max=1296, avg=180.35, stdev=36.59 00:10:31.243 clat percentiles (usec): 00:10:31.243 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:10:31.243 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:10:31.243 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 217], 00:10:31.243 | 99.00th=[ 281], 99.50th=[ 310], 99.90th=[ 502], 99.95th=[ 644], 00:10:31.243 | 99.99th=[ 1287] 00:10:31.243 bw ( KiB/s): min= 9648, max= 9648, per=31.69%, avg=9648.00, stdev= 0.00, samples=1 00:10:31.243 iops : min= 2412, max= 2412, avg=2412.00, stdev= 0.00, samples=1 00:10:31.243 lat (usec) : 250=79.21%, 500=20.69%, 750=0.07% 00:10:31.243 lat (msec) : 2=0.04% 00:10:31.243 cpu : usr=4.40%, sys=7.00%, ctx=4612, majf=0, minf=2 00:10:31.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.243 issued rwts: total=2052,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.243 00:10:31.243 Run status group 0 (all jobs): 00:10:31.243 READ: bw=23.7MiB/s (24.9MB/s), 87.8KiB/s-9327KiB/s (89.9kB/s-9551kB/s), io=24.0MiB (25.1MB), 
run=1001-1009msec 00:10:31.243 WRITE: bw=29.7MiB/s (31.2MB/s), 2044KiB/s-9.99MiB/s (2093kB/s-10.5MB/s), io=30.0MiB (31.5MB), run=1001-1009msec 00:10:31.243 00:10:31.243 Disk stats (read/write): 00:10:31.243 nvme0n1: ios=2082/2048, merge=0/0, ticks=1251/309, in_queue=1560, util=85.16% 00:10:31.243 nvme0n2: ios=1615/2048, merge=0/0, ticks=532/324, in_queue=856, util=89.04% 00:10:31.243 nvme0n3: ios=75/512, merge=0/0, ticks=1589/76, in_queue=1665, util=93.38% 00:10:31.243 nvme0n4: ios=1839/2048, merge=0/0, ticks=467/331, in_queue=798, util=95.34% 00:10:31.243 05:32:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:31.243 [global] 00:10:31.243 thread=1 00:10:31.243 invalidate=1 00:10:31.243 rw=randwrite 00:10:31.243 time_based=1 00:10:31.243 runtime=1 00:10:31.243 ioengine=libaio 00:10:31.243 direct=1 00:10:31.243 bs=4096 00:10:31.243 iodepth=1 00:10:31.243 norandommap=0 00:10:31.243 numjobs=1 00:10:31.243 00:10:31.243 verify_dump=1 00:10:31.243 verify_backlog=512 00:10:31.243 verify_state_save=0 00:10:31.243 do_verify=1 00:10:31.243 verify=crc32c-intel 00:10:31.243 [job0] 00:10:31.243 filename=/dev/nvme0n1 00:10:31.243 [job1] 00:10:31.243 filename=/dev/nvme0n2 00:10:31.243 [job2] 00:10:31.243 filename=/dev/nvme0n3 00:10:31.243 [job3] 00:10:31.243 filename=/dev/nvme0n4 00:10:31.243 Could not set queue depth (nvme0n1) 00:10:31.243 Could not set queue depth (nvme0n2) 00:10:31.243 Could not set queue depth (nvme0n3) 00:10:31.243 Could not set queue depth (nvme0n4) 00:10:31.502 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.502 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.502 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.502 
job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.502 fio-3.35 00:10:31.502 Starting 4 threads 00:10:32.870 00:10:32.870 job0: (groupid=0, jobs=1): err= 0: pid=1652160: Wed Nov 27 05:32:20 2024 00:10:32.870 read: IOPS=1032, BW=4132KiB/s (4231kB/s)(4140KiB/1002msec) 00:10:32.870 slat (nsec): min=7778, max=24217, avg=9115.81, stdev=1980.58 00:10:32.870 clat (usec): min=221, max=41982, avg=637.87, stdev=3784.63 00:10:32.870 lat (usec): min=230, max=42006, avg=646.98, stdev=3785.69 00:10:32.870 clat percentiles (usec): 00:10:32.870 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:10:32.870 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:10:32.870 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 338], 00:10:32.870 | 99.00th=[ 644], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:10:32.870 | 99.99th=[42206] 00:10:32.870 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:10:32.870 slat (nsec): min=7344, max=70182, avg=11792.94, stdev=2562.07 00:10:32.870 clat (usec): min=120, max=1879, avg=199.13, stdev=65.00 00:10:32.870 lat (usec): min=131, max=1890, avg=210.93, stdev=65.20 00:10:32.870 clat percentiles (usec): 00:10:32.870 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 165], 00:10:32.870 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:10:32.870 | 70.00th=[ 198], 80.00th=[ 210], 90.00th=[ 273], 95.00th=[ 326], 00:10:32.870 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 482], 99.95th=[ 1876], 00:10:32.870 | 99.99th=[ 1876] 00:10:32.870 bw ( KiB/s): min= 4096, max= 8192, per=20.73%, avg=6144.00, stdev=2896.31, samples=2 00:10:32.870 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:32.870 lat (usec) : 250=54.49%, 500=45.00%, 750=0.12% 00:10:32.870 lat (msec) : 2=0.04%, 50=0.35% 00:10:32.870 cpu : usr=1.90%, sys=4.30%, ctx=2572, majf=0, minf=1 00:10:32.870 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.870 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.870 job1: (groupid=0, jobs=1): err= 0: pid=1652161: Wed Nov 27 05:32:20 2024 00:10:32.870 read: IOPS=2139, BW=8559KiB/s (8765kB/s)(8568KiB/1001msec) 00:10:32.870 slat (nsec): min=8260, max=44665, avg=9431.92, stdev=1555.43 00:10:32.870 clat (usec): min=193, max=1373, avg=241.39, stdev=39.11 00:10:32.870 lat (usec): min=202, max=1381, avg=250.82, stdev=39.21 00:10:32.870 clat percentiles (usec): 00:10:32.870 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:10:32.870 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:10:32.870 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 273], 00:10:32.870 | 99.00th=[ 351], 99.50th=[ 486], 99.90th=[ 537], 99.95th=[ 668], 00:10:32.870 | 99.99th=[ 1369] 00:10:32.870 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:32.870 slat (nsec): min=11646, max=52933, avg=12965.43, stdev=1836.88 00:10:32.870 clat (usec): min=126, max=347, avg=161.87, stdev=13.88 00:10:32.870 lat (usec): min=139, max=365, avg=174.83, stdev=14.16 00:10:32.870 clat percentiles (usec): 00:10:32.870 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:10:32.870 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:10:32.870 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 186], 00:10:32.870 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 273], 99.95th=[ 285], 00:10:32.870 | 99.99th=[ 347] 00:10:32.870 bw ( KiB/s): min=10944, max=10944, per=36.92%, avg=10944.00, stdev= 0.00, samples=1 00:10:32.870 iops : min= 2736, max= 2736, avg=2736.00, stdev= 0.00, samples=1 
00:10:32.870 lat (usec) : 250=88.30%, 500=11.59%, 750=0.09% 00:10:32.870 lat (msec) : 2=0.02% 00:10:32.870 cpu : usr=3.80%, sys=8.60%, ctx=4705, majf=0, minf=1 00:10:32.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.870 issued rwts: total=2142,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.870 job2: (groupid=0, jobs=1): err= 0: pid=1652162: Wed Nov 27 05:32:20 2024 00:10:32.870 read: IOPS=965, BW=3863KiB/s (3955kB/s)(3940KiB/1020msec) 00:10:32.870 slat (nsec): min=7531, max=30722, avg=9807.58, stdev=1875.64 00:10:32.870 clat (usec): min=216, max=41763, avg=829.45, stdev=4811.45 00:10:32.870 lat (usec): min=225, max=41772, avg=839.25, stdev=4811.58 00:10:32.870 clat percentiles (usec): 00:10:32.870 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:10:32.870 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:10:32.871 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:10:32.871 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41681], 99.95th=[41681], 00:10:32.871 | 99.99th=[41681] 00:10:32.871 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets 00:10:32.871 slat (nsec): min=10037, max=42101, avg=12701.92, stdev=2010.74 00:10:32.871 clat (usec): min=138, max=380, avg=169.41, stdev=14.85 00:10:32.871 lat (usec): min=150, max=393, avg=182.11, stdev=15.37 00:10:32.871 clat percentiles (usec): 00:10:32.871 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:10:32.871 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 172], 00:10:32.871 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:10:32.871 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 322], 99.95th=[ 383], 00:10:32.871 | 
99.99th=[ 383] 00:10:32.871 bw ( KiB/s): min= 8192, max= 8192, per=27.64%, avg=8192.00, stdev= 0.00, samples=1 00:10:32.871 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:32.871 lat (usec) : 250=77.00%, 500=22.30% 00:10:32.871 lat (msec) : 50=0.70% 00:10:32.871 cpu : usr=1.47%, sys=2.16%, ctx=2010, majf=0, minf=1 00:10:32.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.871 issued rwts: total=985,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.871 job3: (groupid=0, jobs=1): err= 0: pid=1652163: Wed Nov 27 05:32:20 2024 00:10:32.871 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:32.871 slat (nsec): min=7208, max=40841, avg=8414.24, stdev=1340.31 00:10:32.871 clat (usec): min=182, max=690, avg=251.40, stdev=44.32 00:10:32.871 lat (usec): min=190, max=699, avg=259.82, stdev=44.42 00:10:32.871 clat percentiles (usec): 00:10:32.871 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:10:32.871 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 251], 60.00th=[ 269], 00:10:32.871 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 314], 00:10:32.871 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 570], 99.95th=[ 660], 00:10:32.871 | 99.99th=[ 693] 00:10:32.871 write: IOPS=2436, BW=9746KiB/s (9980kB/s)(9756KiB/1001msec); 0 zone resets 00:10:32.871 slat (nsec): min=10076, max=45258, avg=11377.34, stdev=1778.75 00:10:32.871 clat (usec): min=120, max=333, avg=175.03, stdev=23.89 00:10:32.871 lat (usec): min=131, max=345, avg=186.41, stdev=24.03 00:10:32.871 clat percentiles (usec): 00:10:32.871 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:10:32.871 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:10:32.871 
| 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 219], 00:10:32.871 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 318], 00:10:32.871 | 99.99th=[ 334] 00:10:32.871 bw ( KiB/s): min= 8192, max= 8192, per=27.64%, avg=8192.00, stdev= 0.00, samples=1 00:10:32.871 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:32.871 lat (usec) : 250=76.15%, 500=23.74%, 750=0.11% 00:10:32.871 cpu : usr=3.60%, sys=7.30%, ctx=4487, majf=0, minf=2 00:10:32.871 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.871 issued rwts: total=2048,2439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.871 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.871 00:10:32.871 Run status group 0 (all jobs): 00:10:32.871 READ: bw=23.8MiB/s (24.9MB/s), 3863KiB/s-8559KiB/s (3955kB/s-8765kB/s), io=24.3MiB (25.4MB), run=1001-1020msec 00:10:32.871 WRITE: bw=28.9MiB/s (30.4MB/s), 4016KiB/s-9.99MiB/s (4112kB/s-10.5MB/s), io=29.5MiB (31.0MB), run=1001-1020msec 00:10:32.871 00:10:32.871 Disk stats (read/write): 00:10:32.871 nvme0n1: ios=1071/1536, merge=0/0, ticks=829/289, in_queue=1118, util=99.50% 00:10:32.871 nvme0n2: ios=1948/2048, merge=0/0, ticks=1420/317, in_queue=1737, util=99.90% 00:10:32.871 nvme0n3: ios=1002/1024, merge=0/0, ticks=1597/164, in_queue=1761, util=100.00% 00:10:32.871 nvme0n4: ios=1708/2048, merge=0/0, ticks=423/347, in_queue=770, util=89.74% 00:10:32.871 05:32:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:32.871 [global] 00:10:32.871 thread=1 00:10:32.871 invalidate=1 00:10:32.871 rw=write 00:10:32.871 time_based=1 00:10:32.871 runtime=1 00:10:32.871 ioengine=libaio 00:10:32.871 direct=1 
00:10:32.871 bs=4096 00:10:32.871 iodepth=128 00:10:32.871 norandommap=0 00:10:32.871 numjobs=1 00:10:32.871 00:10:32.871 verify_dump=1 00:10:32.871 verify_backlog=512 00:10:32.871 verify_state_save=0 00:10:32.871 do_verify=1 00:10:32.871 verify=crc32c-intel 00:10:32.871 [job0] 00:10:32.871 filename=/dev/nvme0n1 00:10:32.871 [job1] 00:10:32.871 filename=/dev/nvme0n2 00:10:32.871 [job2] 00:10:32.871 filename=/dev/nvme0n3 00:10:32.871 [job3] 00:10:32.871 filename=/dev/nvme0n4 00:10:32.871 Could not set queue depth (nvme0n1) 00:10:32.871 Could not set queue depth (nvme0n2) 00:10:32.871 Could not set queue depth (nvme0n3) 00:10:32.871 Could not set queue depth (nvme0n4) 00:10:32.871 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.871 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.871 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.871 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:32.871 fio-3.35 00:10:32.871 Starting 4 threads 00:10:34.301 00:10:34.301 job0: (groupid=0, jobs=1): err= 0: pid=1652529: Wed Nov 27 05:32:22 2024 00:10:34.301 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:10:34.301 slat (nsec): min=1096, max=31693k, avg=119494.05, stdev=947799.10 00:10:34.301 clat (usec): min=2372, max=43760, avg=15062.35, stdev=8316.26 00:10:34.301 lat (usec): min=2380, max=55452, avg=15181.84, stdev=8361.30 00:10:34.301 clat percentiles (usec): 00:10:34.301 | 1.00th=[ 3949], 5.00th=[ 7373], 10.00th=[ 8586], 20.00th=[ 9896], 00:10:34.301 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11469], 60.00th=[12387], 00:10:34.301 | 70.00th=[15139], 80.00th=[20841], 90.00th=[28443], 95.00th=[34866], 00:10:34.301 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:34.301 | 
99.99th=[43779] 00:10:34.301 write: IOPS=4463, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1005msec); 0 zone resets 00:10:34.301 slat (nsec): min=1980, max=20959k, avg=106830.49, stdev=792256.58 00:10:34.301 clat (usec): min=449, max=56984, avg=14482.99, stdev=8629.94 00:10:34.301 lat (usec): min=461, max=57016, avg=14589.82, stdev=8691.81 00:10:34.301 clat percentiles (usec): 00:10:34.301 | 1.00th=[ 1221], 5.00th=[ 4817], 10.00th=[ 6980], 20.00th=[ 8455], 00:10:34.301 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11731], 60.00th=[13173], 00:10:34.301 | 70.00th=[15270], 80.00th=[19792], 90.00th=[25560], 95.00th=[34866], 00:10:34.301 | 99.00th=[42730], 99.50th=[49546], 99.90th=[56361], 99.95th=[56361], 00:10:34.301 | 99.99th=[56886] 00:10:34.301 bw ( KiB/s): min=17168, max=17704, per=25.38%, avg=17436.00, stdev=379.01, samples=2 00:10:34.301 iops : min= 4292, max= 4426, avg=4359.00, stdev=94.75, samples=2 00:10:34.301 lat (usec) : 500=0.06%, 750=0.05% 00:10:34.301 lat (msec) : 2=0.84%, 4=1.29%, 10=22.01%, 20=56.13%, 50=19.42% 00:10:34.301 lat (msec) : 100=0.20% 00:10:34.301 cpu : usr=2.29%, sys=4.48%, ctx=431, majf=0, minf=1 00:10:34.301 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:34.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.301 issued rwts: total=4096,4486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.301 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.301 job1: (groupid=0, jobs=1): err= 0: pid=1652530: Wed Nov 27 05:32:22 2024 00:10:34.301 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:10:34.301 slat (nsec): min=1119, max=15746k, avg=85286.47, stdev=612158.55 00:10:34.301 clat (usec): min=1590, max=58087, avg=11530.25, stdev=5464.85 00:10:34.301 lat (usec): min=1615, max=58110, avg=11615.54, stdev=5507.44 00:10:34.301 clat percentiles (usec): 00:10:34.301 | 1.00th=[ 3654], 5.00th=[ 
4178], 10.00th=[ 5276], 20.00th=[ 7504], 00:10:34.301 | 30.00th=[ 8094], 40.00th=[ 9372], 50.00th=[11076], 60.00th=[13173], 00:10:34.301 | 70.00th=[13698], 80.00th=[15270], 90.00th=[17171], 95.00th=[18744], 00:10:34.302 | 99.00th=[29754], 99.50th=[30278], 99.90th=[53216], 99.95th=[53216], 00:10:34.302 | 99.99th=[57934] 00:10:34.302 write: IOPS=5133, BW=20.1MiB/s (21.0MB/s)(20.2MiB/1009msec); 0 zone resets 00:10:34.302 slat (nsec): min=1894, max=14142k, avg=97364.83, stdev=674985.37 00:10:34.302 clat (usec): min=2367, max=52746, avg=13162.68, stdev=6349.52 00:10:34.302 lat (usec): min=2398, max=56907, avg=13260.04, stdev=6409.96 00:10:34.302 clat percentiles (usec): 00:10:34.302 | 1.00th=[ 4015], 5.00th=[ 5735], 10.00th=[ 7177], 20.00th=[ 8029], 00:10:34.302 | 30.00th=[ 9110], 40.00th=[10290], 50.00th=[11994], 60.00th=[13042], 00:10:34.302 | 70.00th=[15270], 80.00th=[17957], 90.00th=[22152], 95.00th=[25560], 00:10:34.302 | 99.00th=[30802], 99.50th=[31327], 99.90th=[52167], 99.95th=[52691], 00:10:34.302 | 99.99th=[52691] 00:10:34.302 bw ( KiB/s): min=17992, max=22968, per=29.81%, avg=20480.00, stdev=3518.56, samples=2 00:10:34.302 iops : min= 4498, max= 5742, avg=5120.00, stdev=879.64, samples=2 00:10:34.302 lat (msec) : 2=0.01%, 4=2.01%, 10=39.73%, 20=48.56%, 50=9.32% 00:10:34.302 lat (msec) : 100=0.37% 00:10:34.302 cpu : usr=3.27%, sys=5.85%, ctx=375, majf=0, minf=1 00:10:34.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:34.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.302 issued rwts: total=5120,5180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.302 job2: (groupid=0, jobs=1): err= 0: pid=1652533: Wed Nov 27 05:32:22 2024 00:10:34.302 read: IOPS=3792, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1003msec) 00:10:34.302 slat (nsec): min=1107, 
max=19687k, avg=121531.57, stdev=853085.33 00:10:34.302 clat (usec): min=871, max=98200, avg=16493.57, stdev=11820.97 00:10:34.302 lat (usec): min=5509, max=98206, avg=16615.11, stdev=11888.29 00:10:34.302 clat percentiles (usec): 00:10:34.302 | 1.00th=[ 5997], 5.00th=[ 8979], 10.00th=[10552], 20.00th=[11076], 00:10:34.302 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12387], 00:10:34.302 | 70.00th=[13566], 80.00th=[17433], 90.00th=[30802], 95.00th=[42206], 00:10:34.302 | 99.00th=[70779], 99.50th=[85459], 99.90th=[98042], 99.95th=[98042], 00:10:34.302 | 99.99th=[98042] 00:10:34.302 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:34.302 slat (usec): min=2, max=12120, avg=126.80, stdev=722.16 00:10:34.302 clat (msec): min=6, max=110, avg=15.63, stdev=13.19 00:10:34.302 lat (msec): min=6, max=110, avg=15.76, stdev=13.29 00:10:34.302 clat percentiles (msec): 00:10:34.302 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:10:34.302 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:10:34.302 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 24], 95.00th=[ 29], 00:10:34.302 | 99.00th=[ 100], 99.50th=[ 106], 99.90th=[ 111], 99.95th=[ 111], 00:10:34.302 | 99.99th=[ 111] 00:10:34.302 bw ( KiB/s): min=10608, max=22160, per=23.85%, avg=16384.00, stdev=8168.50, samples=2 00:10:34.302 iops : min= 2652, max= 5540, avg=4096.00, stdev=2042.12, samples=2 00:10:34.302 lat (usec) : 1000=0.01% 00:10:34.302 lat (msec) : 10=6.23%, 20=76.89%, 50=14.65%, 100=1.77%, 250=0.46% 00:10:34.302 cpu : usr=2.30%, sys=4.09%, ctx=396, majf=0, minf=1 00:10:34.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:34.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.302 issued rwts: total=3804,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.302 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:34.302 job3: (groupid=0, jobs=1): err= 0: pid=1652534: Wed Nov 27 05:32:22 2024 00:10:34.302 read: IOPS=3134, BW=12.2MiB/s (12.8MB/s)(12.4MiB/1010msec) 00:10:34.302 slat (nsec): min=1080, max=19010k, avg=148180.12, stdev=1069746.22 00:10:34.302 clat (usec): min=818, max=69021, avg=19774.49, stdev=12568.44 00:10:34.302 lat (usec): min=826, max=69088, avg=19922.67, stdev=12644.09 00:10:34.302 clat percentiles (usec): 00:10:34.302 | 1.00th=[ 1319], 5.00th=[10683], 10.00th=[12125], 20.00th=[12649], 00:10:34.302 | 30.00th=[13042], 40.00th=[13304], 50.00th=[15008], 60.00th=[17171], 00:10:34.302 | 70.00th=[20055], 80.00th=[21890], 90.00th=[33817], 95.00th=[47973], 00:10:34.302 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:10:34.302 | 99.99th=[68682] 00:10:34.302 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:10:34.302 slat (usec): min=2, max=15650, avg=140.94, stdev=829.85 00:10:34.302 clat (usec): min=1099, max=49742, avg=18263.41, stdev=9968.58 00:10:34.302 lat (usec): min=1107, max=60517, avg=18404.35, stdev=10055.70 00:10:34.302 clat percentiles (usec): 00:10:34.302 | 1.00th=[ 5997], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[11076], 00:10:34.302 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[18220], 00:10:34.302 | 70.00th=[21627], 80.00th=[26608], 90.00th=[33817], 95.00th=[40109], 00:10:34.302 | 99.00th=[47973], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:10:34.302 | 99.99th=[49546] 00:10:34.302 bw ( KiB/s): min=12024, max=16384, per=20.68%, avg=14204.00, stdev=3082.99, samples=2 00:10:34.302 iops : min= 3006, max= 4096, avg=3551.00, stdev=770.75, samples=2 00:10:34.302 lat (usec) : 1000=0.24% 00:10:34.302 lat (msec) : 2=0.31%, 4=0.09%, 10=7.29%, 20=60.55%, 50=29.73% 00:10:34.302 lat (msec) : 100=1.79% 00:10:34.302 cpu : usr=1.88%, sys=4.36%, ctx=354, majf=0, minf=1 00:10:34.302 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 
00:10:34.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.302 issued rwts: total=3166,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.302 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.302 00:10:34.302 Run status group 0 (all jobs): 00:10:34.302 READ: bw=62.6MiB/s (65.6MB/s), 12.2MiB/s-19.8MiB/s (12.8MB/s-20.8MB/s), io=63.2MiB (66.3MB), run=1003-1010msec 00:10:34.302 WRITE: bw=67.1MiB/s (70.3MB/s), 13.9MiB/s-20.1MiB/s (14.5MB/s-21.0MB/s), io=67.8MiB (71.0MB), run=1003-1010msec 00:10:34.302 00:10:34.302 Disk stats (read/write): 00:10:34.302 nvme0n1: ios=3637/3842, merge=0/0, ticks=21034/22691, in_queue=43725, util=92.89% 00:10:34.302 nvme0n2: ios=3624/3843, merge=0/0, ticks=26128/26289, in_queue=52417, util=95.37% 00:10:34.302 nvme0n3: ios=3632/3584, merge=0/0, ticks=19964/14654, in_queue=34618, util=97.83% 00:10:34.302 nvme0n4: ios=2491/2560, merge=0/0, ticks=28337/23319, in_queue=51656, util=96.35% 00:10:34.302 05:32:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:34.302 [global] 00:10:34.302 thread=1 00:10:34.302 invalidate=1 00:10:34.302 rw=randwrite 00:10:34.302 time_based=1 00:10:34.302 runtime=1 00:10:34.302 ioengine=libaio 00:10:34.302 direct=1 00:10:34.302 bs=4096 00:10:34.302 iodepth=128 00:10:34.302 norandommap=0 00:10:34.302 numjobs=1 00:10:34.302 00:10:34.302 verify_dump=1 00:10:34.302 verify_backlog=512 00:10:34.302 verify_state_save=0 00:10:34.302 do_verify=1 00:10:34.302 verify=crc32c-intel 00:10:34.302 [job0] 00:10:34.302 filename=/dev/nvme0n1 00:10:34.302 [job1] 00:10:34.302 filename=/dev/nvme0n2 00:10:34.302 [job2] 00:10:34.302 filename=/dev/nvme0n3 00:10:34.302 [job3] 00:10:34.302 filename=/dev/nvme0n4 00:10:34.302 Could not set queue depth (nvme0n1) 00:10:34.302 
Could not set queue depth (nvme0n2) 00:10:34.302 Could not set queue depth (nvme0n3) 00:10:34.302 Could not set queue depth (nvme0n4) 00:10:34.592 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.592 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.592 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.592 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.592 fio-3.35 00:10:34.592 Starting 4 threads 00:10:36.033 00:10:36.033 job0: (groupid=0, jobs=1): err= 0: pid=1652912: Wed Nov 27 05:32:23 2024 00:10:36.033 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:10:36.033 slat (nsec): min=1335, max=10277k, avg=88125.44, stdev=635059.52 00:10:36.033 clat (usec): min=3990, max=28820, avg=10951.60, stdev=3444.33 00:10:36.033 lat (usec): min=3999, max=28827, avg=11039.72, stdev=3490.69 00:10:36.033 clat percentiles (usec): 00:10:36.033 | 1.00th=[ 6063], 5.00th=[ 7111], 10.00th=[ 8586], 20.00th=[ 9110], 00:10:36.033 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159], 00:10:36.033 | 70.00th=[11076], 80.00th=[12780], 90.00th=[14877], 95.00th=[17695], 00:10:36.033 | 99.00th=[25297], 99.50th=[27132], 99.90th=[28181], 99.95th=[28705], 00:10:36.033 | 99.99th=[28705] 00:10:36.033 write: IOPS=5920, BW=23.1MiB/s (24.2MB/s)(23.3MiB/1006msec); 0 zone resets 00:10:36.033 slat (usec): min=2, max=18109, avg=78.86, stdev=511.66 00:10:36.033 clat (usec): min=1998, max=35095, avg=11059.94, stdev=4932.69 00:10:36.033 lat (usec): min=2008, max=35099, avg=11138.81, stdev=4969.18 00:10:36.033 clat percentiles (usec): 00:10:36.033 | 1.00th=[ 3294], 5.00th=[ 5604], 10.00th=[ 6915], 20.00th=[ 7701], 00:10:36.033 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10028], 00:10:36.033 | 
70.00th=[10421], 80.00th=[14877], 90.00th=[18220], 95.00th=[21890], 00:10:36.033 | 99.00th=[27919], 99.50th=[30802], 99.90th=[33817], 99.95th=[33817], 00:10:36.033 | 99.99th=[34866] 00:10:36.033 bw ( KiB/s): min=20480, max=26144, per=33.57%, avg=23312.00, stdev=4005.05, samples=2 00:10:36.033 iops : min= 5120, max= 6536, avg=5828.00, stdev=1001.26, samples=2 00:10:36.033 lat (msec) : 2=0.04%, 4=1.17%, 10=57.94%, 20=35.55%, 50=5.30% 00:10:36.033 cpu : usr=4.28%, sys=6.67%, ctx=567, majf=0, minf=1 00:10:36.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:36.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.034 issued rwts: total=5632,5956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.034 job1: (groupid=0, jobs=1): err= 0: pid=1652913: Wed Nov 27 05:32:23 2024 00:10:36.034 read: IOPS=2709, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1007msec) 00:10:36.034 slat (nsec): min=1169, max=20790k, avg=176793.33, stdev=1112950.81 00:10:36.034 clat (usec): min=247, max=57301, avg=22095.31, stdev=8320.61 00:10:36.034 lat (usec): min=3154, max=57326, avg=22272.10, stdev=8395.86 00:10:36.034 clat percentiles (usec): 00:10:36.034 | 1.00th=[ 7635], 5.00th=[ 8979], 10.00th=[13304], 20.00th=[15533], 00:10:36.034 | 30.00th=[16909], 40.00th=[18744], 50.00th=[21890], 60.00th=[23987], 00:10:36.034 | 70.00th=[25560], 80.00th=[26870], 90.00th=[33162], 95.00th=[38011], 00:10:36.034 | 99.00th=[49546], 99.50th=[49546], 99.90th=[51119], 99.95th=[55313], 00:10:36.034 | 99.99th=[57410] 00:10:36.034 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:10:36.034 slat (nsec): min=1931, max=17147k, avg=162689.28, stdev=770233.58 00:10:36.034 clat (usec): min=2987, max=53653, avg=21881.80, stdev=11381.75 00:10:36.034 lat (usec): min=2994, max=53662, avg=22044.49, 
stdev=11449.35 00:10:36.034 clat percentiles (usec): 00:10:36.034 | 1.00th=[ 4621], 5.00th=[ 6980], 10.00th=[ 7701], 20.00th=[10421], 00:10:36.034 | 30.00th=[16057], 40.00th=[18744], 50.00th=[21103], 60.00th=[21890], 00:10:36.034 | 70.00th=[24773], 80.00th=[30278], 90.00th=[37487], 95.00th=[46924], 00:10:36.034 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:10:36.034 | 99.99th=[53740] 00:10:36.034 bw ( KiB/s): min=12288, max=12288, per=17.69%, avg=12288.00, stdev= 0.00, samples=2 00:10:36.034 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:36.034 lat (usec) : 250=0.02% 00:10:36.034 lat (msec) : 4=0.31%, 10=12.03%, 20=30.90%, 50=55.02%, 100=1.72% 00:10:36.034 cpu : usr=1.99%, sys=2.98%, ctx=370, majf=0, minf=2 00:10:36.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:36.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.034 issued rwts: total=2728,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.034 job2: (groupid=0, jobs=1): err= 0: pid=1652914: Wed Nov 27 05:32:23 2024 00:10:36.034 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:10:36.034 slat (nsec): min=1150, max=43621k, avg=138090.88, stdev=1056250.33 00:10:36.034 clat (usec): min=8141, max=53341, avg=17522.67, stdev=8790.51 00:10:36.034 lat (usec): min=8145, max=53345, avg=17660.76, stdev=8824.59 00:10:36.034 clat percentiles (usec): 00:10:36.034 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[11731], 20.00th=[12125], 00:10:36.034 | 30.00th=[12387], 40.00th=[13566], 50.00th=[14091], 60.00th=[15664], 00:10:36.034 | 70.00th=[19792], 80.00th=[21365], 90.00th=[22676], 95.00th=[35390], 00:10:36.034 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:10:36.034 | 99.99th=[53216] 00:10:36.034 write: IOPS=3824, 
BW=14.9MiB/s (15.7MB/s)(15.0MiB/1006msec); 0 zone resets 00:10:36.034 slat (nsec): min=1933, max=18098k, avg=127568.86, stdev=782125.55 00:10:36.034 clat (usec): min=2135, max=50815, avg=16561.80, stdev=7975.99 00:10:36.034 lat (usec): min=3931, max=50847, avg=16689.36, stdev=8009.57 00:10:36.034 clat percentiles (usec): 00:10:36.034 | 1.00th=[ 7439], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[11731], 00:10:36.034 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13960], 60.00th=[16188], 00:10:36.034 | 70.00th=[17695], 80.00th=[19530], 90.00th=[25297], 95.00th=[35390], 00:10:36.034 | 99.00th=[46400], 99.50th=[46924], 99.90th=[46924], 99.95th=[50070], 00:10:36.034 | 99.99th=[50594] 00:10:36.034 bw ( KiB/s): min=14696, max=15056, per=21.42%, avg=14876.00, stdev=254.56, samples=2 00:10:36.034 iops : min= 3674, max= 3764, avg=3719.00, stdev=63.64, samples=2 00:10:36.034 lat (msec) : 4=0.20%, 10=7.51%, 20=70.08%, 50=20.48%, 100=1.72% 00:10:36.034 cpu : usr=2.39%, sys=3.28%, ctx=348, majf=0, minf=1 00:10:36.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:36.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.034 issued rwts: total=3584,3847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.034 job3: (groupid=0, jobs=1): err= 0: pid=1652915: Wed Nov 27 05:32:23 2024 00:10:36.034 read: IOPS=4197, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1004msec) 00:10:36.034 slat (nsec): min=1526, max=28602k, avg=112847.47, stdev=786701.63 00:10:36.034 clat (usec): min=1443, max=48070, avg=14629.34, stdev=6518.13 00:10:36.034 lat (usec): min=3892, max=48086, avg=14742.18, stdev=6552.17 00:10:36.034 clat percentiles (usec): 00:10:36.034 | 1.00th=[ 4817], 5.00th=[ 8356], 10.00th=[10028], 20.00th=[10945], 00:10:36.034 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12518], 60.00th=[13304], 
00:10:36.034 | 70.00th=[16057], 80.00th=[17433], 90.00th=[20317], 95.00th=[26084], 00:10:36.034 | 99.00th=[45876], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:10:36.034 | 99.99th=[47973] 00:10:36.034 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:10:36.034 slat (usec): min=2, max=13134, avg=107.78, stdev=610.47 00:10:36.034 clat (usec): min=5658, max=44625, avg=14194.39, stdev=5985.68 00:10:36.034 lat (usec): min=5668, max=44644, avg=14302.17, stdev=6038.92 00:10:36.034 clat percentiles (usec): 00:10:36.034 | 1.00th=[ 5669], 5.00th=[ 6718], 10.00th=[10421], 20.00th=[10814], 00:10:36.034 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11731], 60.00th=[12911], 00:10:36.034 | 70.00th=[13829], 80.00th=[16712], 90.00th=[24249], 95.00th=[28443], 00:10:36.034 | 99.00th=[32637], 99.50th=[34866], 99.90th=[39060], 99.95th=[41681], 00:10:36.034 | 99.99th=[44827] 00:10:36.034 bw ( KiB/s): min=17248, max=19536, per=26.48%, avg=18392.00, stdev=1617.86, samples=2 00:10:36.034 iops : min= 4312, max= 4884, avg=4598.00, stdev=404.47, samples=2 00:10:36.034 lat (msec) : 2=0.01%, 4=0.08%, 10=8.44%, 20=77.77%, 50=13.69% 00:10:36.034 cpu : usr=4.09%, sys=5.98%, ctx=390, majf=0, minf=1 00:10:36.034 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:36.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.034 issued rwts: total=4214,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.034 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.034 00:10:36.034 Run status group 0 (all jobs): 00:10:36.034 READ: bw=62.7MiB/s (65.7MB/s), 10.6MiB/s-21.9MiB/s (11.1MB/s-22.9MB/s), io=63.1MiB (66.2MB), run=1004-1007msec 00:10:36.034 WRITE: bw=67.8MiB/s (71.1MB/s), 11.9MiB/s-23.1MiB/s (12.5MB/s-24.2MB/s), io=68.3MiB (71.6MB), run=1004-1007msec 00:10:36.034 00:10:36.034 Disk stats (read/write): 00:10:36.034 
nvme0n1: ios=4641/4950, merge=0/0, ticks=50121/54405, in_queue=104526, util=96.49% 00:10:36.034 nvme0n2: ios=2417/2560, merge=0/0, ticks=18473/19751, in_queue=38224, util=96.14% 00:10:36.034 nvme0n3: ios=3128/3430, merge=0/0, ticks=17276/17251, in_queue=34527, util=97.19% 00:10:36.034 nvme0n4: ios=3544/3584, merge=0/0, ticks=26490/26704, in_queue=53194, util=96.64% 00:10:36.034 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:36.034 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1653150 00:10:36.034 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:36.034 05:32:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:36.034 [global] 00:10:36.034 thread=1 00:10:36.034 invalidate=1 00:10:36.034 rw=read 00:10:36.034 time_based=1 00:10:36.034 runtime=10 00:10:36.034 ioengine=libaio 00:10:36.034 direct=1 00:10:36.034 bs=4096 00:10:36.034 iodepth=1 00:10:36.034 norandommap=1 00:10:36.034 numjobs=1 00:10:36.034 00:10:36.034 [job0] 00:10:36.034 filename=/dev/nvme0n1 00:10:36.034 [job1] 00:10:36.034 filename=/dev/nvme0n2 00:10:36.034 [job2] 00:10:36.034 filename=/dev/nvme0n3 00:10:36.034 [job3] 00:10:36.034 filename=/dev/nvme0n4 00:10:36.034 Could not set queue depth (nvme0n1) 00:10:36.034 Could not set queue depth (nvme0n2) 00:10:36.034 Could not set queue depth (nvme0n3) 00:10:36.034 Could not set queue depth (nvme0n4) 00:10:36.034 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.034 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.034 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.034 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.034 fio-3.35 00:10:36.034 Starting 4 threads 00:10:39.313 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:39.313 05:32:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:39.313 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=278528, buflen=4096 00:10:39.313 fio: pid=1653299, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.313 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.313 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:39.313 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=536576, buflen=4096 00:10:39.313 fio: pid=1653296, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.571 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.571 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:39.571 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2174976, buflen=4096 00:10:39.571 fio: pid=1653286, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.571 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=59555840, buflen=4096 00:10:39.571 fio: pid=1653290, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.571 
05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.571 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:39.830 00:10:39.830 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1653286: Wed Nov 27 05:32:27 2024 00:10:39.830 read: IOPS=166, BW=666KiB/s (682kB/s)(2124KiB/3189msec) 00:10:39.830 slat (usec): min=6, max=4885, avg=19.31, stdev=211.45 00:10:39.830 clat (usec): min=168, max=44923, avg=5943.76, stdev=14209.88 00:10:39.830 lat (usec): min=176, max=46025, avg=5963.06, stdev=14238.28 00:10:39.830 clat percentiles (usec): 00:10:39.830 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 212], 00:10:39.830 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 245], 00:10:39.830 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[41157], 95.00th=[41157], 00:10:39.830 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:10:39.830 | 99.99th=[44827] 00:10:39.830 bw ( KiB/s): min= 96, max= 3504, per=3.87%, avg=702.00, stdev=1375.59, samples=6 00:10:39.830 iops : min= 24, max= 876, avg=175.50, stdev=343.90, samples=6 00:10:39.830 lat (usec) : 250=68.80%, 500=17.11% 00:10:39.830 lat (msec) : 50=13.91% 00:10:39.830 cpu : usr=0.09%, sys=0.31%, ctx=533, majf=0, minf=1 00:10:39.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.830 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.830 issued rwts: total=532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.830 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not 
supported): pid=1653290: Wed Nov 27 05:32:27 2024 00:10:39.830 read: IOPS=4321, BW=16.9MiB/s (17.7MB/s)(56.8MiB/3365msec) 00:10:39.830 slat (usec): min=5, max=17026, avg=13.21, stdev=279.25 00:10:39.830 clat (usec): min=171, max=42121, avg=214.72, stdev=591.67 00:10:39.830 lat (usec): min=179, max=57890, avg=227.93, stdev=721.25 00:10:39.830 clat percentiles (usec): 00:10:39.830 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:10:39.830 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 206], 00:10:39.830 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 219], 95.00th=[ 227], 00:10:39.830 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 318], 99.95th=[ 668], 00:10:39.830 | 99.99th=[41157] 00:10:39.830 bw ( KiB/s): min=17536, max=18872, per=100.00%, avg=18395.50, stdev=577.13, samples=6 00:10:39.830 iops : min= 4384, max= 4718, avg=4598.83, stdev=144.34, samples=6 00:10:39.830 lat (usec) : 250=97.11%, 500=2.82%, 750=0.01% 00:10:39.830 lat (msec) : 2=0.03%, 50=0.02% 00:10:39.830 cpu : usr=2.50%, sys=6.75%, ctx=14546, majf=0, minf=2 00:10:39.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.830 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.830 issued rwts: total=14541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.830 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1653296: Wed Nov 27 05:32:27 2024 00:10:39.830 read: IOPS=44, BW=176KiB/s (180kB/s)(524KiB/2978msec) 00:10:39.830 slat (usec): min=8, max=13535, avg=120.15, stdev=1176.62 00:10:39.830 clat (usec): min=216, max=42395, avg=22405.65, stdev=20447.53 00:10:39.830 lat (usec): min=227, max=54697, avg=22526.53, stdev=20575.30 00:10:39.830 clat percentiles (usec): 00:10:39.830 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 
237], 20.00th=[ 245], 00:10:39.830 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[40633], 60.00th=[40633], 00:10:39.830 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:10:39.830 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:39.830 | 99.99th=[42206] 00:10:39.830 bw ( KiB/s): min= 120, max= 224, per=0.99%, avg=180.80, stdev=49.51, samples=5 00:10:39.830 iops : min= 30, max= 56, avg=45.20, stdev=12.38, samples=5 00:10:39.830 lat (usec) : 250=27.27%, 500=18.18% 00:10:39.830 lat (msec) : 50=53.79% 00:10:39.830 cpu : usr=0.17%, sys=0.00%, ctx=135, majf=0, minf=2 00:10:39.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.830 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.830 issued rwts: total=132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.830 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1653299: Wed Nov 27 05:32:27 2024 00:10:39.830 read: IOPS=25, BW=99.3KiB/s (102kB/s)(272KiB/2739msec) 00:10:39.830 slat (nsec): min=10451, max=31132, avg=23084.61, stdev=2499.50 00:10:39.830 clat (usec): min=273, max=42049, avg=39936.19, stdev=6947.99 00:10:39.830 lat (usec): min=297, max=42072, avg=39959.25, stdev=6947.19 00:10:39.830 clat percentiles (usec): 00:10:39.830 | 1.00th=[ 273], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:39.830 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:39.830 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:39.830 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:39.830 | 99.99th=[42206] 00:10:39.830 bw ( KiB/s): min= 96, max= 104, per=0.55%, avg=99.20, stdev= 4.38, samples=5 00:10:39.830 iops : min= 24, max= 26, avg=24.80, 
stdev= 1.10, samples=5 00:10:39.830 lat (usec) : 500=2.90% 00:10:39.830 lat (msec) : 50=95.65% 00:10:39.830 cpu : usr=0.15%, sys=0.00%, ctx=69, majf=0, minf=2 00:10:39.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.830 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.830 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.830 00:10:39.830 Run status group 0 (all jobs): 00:10:39.830 READ: bw=17.7MiB/s (18.6MB/s), 99.3KiB/s-16.9MiB/s (102kB/s-17.7MB/s), io=59.6MiB (62.5MB), run=2739-3365msec 00:10:39.830 00:10:39.830 Disk stats (read/write): 00:10:39.830 nvme0n1: ios=529/0, merge=0/0, ticks=3069/0, in_queue=3069, util=95.53% 00:10:39.830 nvme0n2: ios=14541/0, merge=0/0, ticks=2947/0, in_queue=2947, util=94.26% 00:10:39.830 nvme0n3: ios=154/0, merge=0/0, ticks=3046/0, in_queue=3046, util=97.19% 00:10:39.830 nvme0n4: ios=94/0, merge=0/0, ticks=2662/0, in_queue=2662, util=97.26% 00:10:39.830 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.830 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:40.088 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.088 05:32:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:40.346 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.346 
05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:40.602 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.602 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:40.602 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:40.602 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1653150 00:10:40.602 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:40.602 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.861 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:40.861 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:40.861 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:40.861 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.861 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:40.861 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:40.861 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:40.861 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 
4 -eq 0 ']' 00:10:40.861 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:40.861 nvmf hotplug test: fio failed as expected 00:10:40.861 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.120 05:32:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.120 rmmod nvme_tcp 00:10:41.120 rmmod nvme_fabrics 00:10:41.120 rmmod nvme_keyring 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@129 -- # return 0 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1650210 ']' 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1650210 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1650210 ']' 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1650210 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1650210 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1650210' 00:10:41.120 killing process with pid 1650210 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1650210 00:10:41.120 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1650210 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:41.378 05:32:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.378 05:32:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.914 00:10:43.914 real 0m27.394s 00:10:43.914 user 1m49.544s 00:10:43.914 sys 0m8.651s 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.914 ************************************ 00:10:43.914 END TEST nvmf_fio_target 00:10:43.914 ************************************ 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:10:43.914 ************************************ 00:10:43.914 START TEST nvmf_bdevio 00:10:43.914 ************************************ 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:43.914 * Looking for test storage... 00:10:43.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.914 05:32:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.914 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.914 --rc genhtml_branch_coverage=1 00:10:43.914 --rc genhtml_function_coverage=1 00:10:43.914 --rc genhtml_legend=1 00:10:43.914 --rc geninfo_all_blocks=1 00:10:43.914 --rc geninfo_unexecuted_blocks=1 00:10:43.914 00:10:43.914 ' 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.915 --rc genhtml_branch_coverage=1 00:10:43.915 --rc genhtml_function_coverage=1 00:10:43.915 --rc genhtml_legend=1 00:10:43.915 --rc geninfo_all_blocks=1 00:10:43.915 --rc geninfo_unexecuted_blocks=1 00:10:43.915 00:10:43.915 ' 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.915 --rc genhtml_branch_coverage=1 00:10:43.915 --rc genhtml_function_coverage=1 00:10:43.915 --rc genhtml_legend=1 00:10:43.915 --rc geninfo_all_blocks=1 00:10:43.915 --rc geninfo_unexecuted_blocks=1 00:10:43.915 00:10:43.915 ' 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.915 --rc genhtml_branch_coverage=1 00:10:43.915 --rc genhtml_function_coverage=1 00:10:43.915 --rc genhtml_legend=1 00:10:43.915 --rc geninfo_all_blocks=1 00:10:43.915 --rc geninfo_unexecuted_blocks=1 00:10:43.915 00:10:43.915 ' 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.915 05:32:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.486 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.486 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.486 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.486 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.486 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.487 05:32:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.487 05:32:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:50.487 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:50.487 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.487 
05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:50.487 Found net devices under 0000:86:00.0: cvl_0_0 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:50.487 Found net devices under 0000:86:00.1: cvl_0_1 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:50.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:10:50.487 00:10:50.487 --- 10.0.0.2 ping statistics --- 00:10:50.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.487 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:50.487 00:10:50.487 --- 10.0.0.1 ping statistics --- 00:10:50.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.487 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:50.487 05:32:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1657757 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1657757 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1657757 ']' 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.487 05:32:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.487 [2024-11-27 05:32:37.633568] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:10:50.487 [2024-11-27 05:32:37.633613] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.487 [2024-11-27 05:32:37.713994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.487 [2024-11-27 05:32:37.756150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.487 [2024-11-27 05:32:37.756186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.487 [2024-11-27 05:32:37.756193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.487 [2024-11-27 05:32:37.756199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.487 [2024-11-27 05:32:37.756205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:50.487 [2024-11-27 05:32:37.757761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:50.487 [2024-11-27 05:32:37.757870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:50.487 [2024-11-27 05:32:37.757977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.487 [2024-11-27 05:32:37.757978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:50.487 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.487 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:50.487 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.487 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.487 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.747 [2024-11-27 05:32:38.525531] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.747 05:32:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.747 Malloc0 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.747 [2024-11-27 05:32:38.585656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:50.747 { 00:10:50.747 "params": { 00:10:50.747 "name": "Nvme$subsystem", 00:10:50.747 "trtype": "$TEST_TRANSPORT", 00:10:50.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:50.747 "adrfam": "ipv4", 00:10:50.747 "trsvcid": "$NVMF_PORT", 00:10:50.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:50.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:50.747 "hdgst": ${hdgst:-false}, 00:10:50.747 "ddgst": ${ddgst:-false} 00:10:50.747 }, 00:10:50.747 "method": "bdev_nvme_attach_controller" 00:10:50.747 } 00:10:50.747 EOF 00:10:50.747 )") 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:50.747 05:32:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:50.747 "params": { 00:10:50.747 "name": "Nvme1", 00:10:50.747 "trtype": "tcp", 00:10:50.747 "traddr": "10.0.0.2", 00:10:50.747 "adrfam": "ipv4", 00:10:50.747 "trsvcid": "4420", 00:10:50.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.747 "hdgst": false, 00:10:50.747 "ddgst": false 00:10:50.747 }, 00:10:50.747 "method": "bdev_nvme_attach_controller" 00:10:50.747 }' 00:10:50.747 [2024-11-27 05:32:38.637962] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:10:50.747 [2024-11-27 05:32:38.638009] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658006 ] 00:10:50.747 [2024-11-27 05:32:38.713472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.007 [2024-11-27 05:32:38.756894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.007 [2024-11-27 05:32:38.756999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.007 [2024-11-27 05:32:38.757000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.007 I/O targets: 00:10:51.007 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:51.007 00:10:51.007 00:10:51.007 CUnit - A unit testing framework for C - Version 2.1-3 00:10:51.007 http://cunit.sourceforge.net/ 00:10:51.007 00:10:51.007 00:10:51.007 Suite: bdevio tests on: Nvme1n1 00:10:51.267 Test: blockdev write read block ...passed 00:10:51.267 Test: blockdev write zeroes read block ...passed 00:10:51.267 Test: blockdev write zeroes read no split ...passed 00:10:51.267 Test: blockdev write zeroes read split 
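Above, `gen_nvmf_target_json` assembles one `bdev_nvme_attach_controller` config entry per subsystem from a heredoc template and pipes it through `jq` before feeding it to `bdevio --json /dev/fd/62`. A hedged Python sketch of the single entry printed in the log (the outer JSON document the entry is embedded in is not shown in this trace and is omitted here):

```python
import json

def attach_controller_entry(n, traddr, trsvcid, hdgst=False, ddgst=False):
    # One config entry, shaped like the object printf'd by
    # gen_nvmf_target_json in the log for subsystem index n.
    return {
        "params": {
            "name": f"Nvme{n}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{n}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{n}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    }

entry = attach_controller_entry(1, "10.0.0.2", "4420")
print(json.dumps(entry, indent=2))
```

With `n=1`, `traddr=10.0.0.2`, `trsvcid=4420` this reproduces the `Nvme1` entry emitted at `nvmf/common.sh@586` above.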
...passed 00:10:51.267 Test: blockdev write zeroes read split partial ...passed 00:10:51.267 Test: blockdev reset ...[2024-11-27 05:32:39.111462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:51.267 [2024-11-27 05:32:39.111531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x959350 (9): Bad file descriptor 00:10:51.267 [2024-11-27 05:32:39.213700] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:51.267 passed 00:10:51.267 Test: blockdev write read 8 blocks ...passed 00:10:51.526 Test: blockdev write read size > 128k ...passed 00:10:51.526 Test: blockdev write read invalid size ...passed 00:10:51.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.526 Test: blockdev write read max offset ...passed 00:10:51.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.526 Test: blockdev writev readv 8 blocks ...passed 00:10:51.526 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.526 Test: blockdev writev readv block ...passed 00:10:51.526 Test: blockdev writev readv size > 128k ...passed 00:10:51.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.526 Test: blockdev comparev and writev ...[2024-11-27 05:32:39.507651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.526 [2024-11-27 05:32:39.507682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:51.526 [2024-11-27 05:32:39.507697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.526 [2024-11-27 
05:32:39.507706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:51.526 [2024-11-27 05:32:39.507949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.526 [2024-11-27 05:32:39.507960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:51.526 [2024-11-27 05:32:39.507972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.526 [2024-11-27 05:32:39.507983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:51.526 [2024-11-27 05:32:39.508212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.526 [2024-11-27 05:32:39.508222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:51.526 [2024-11-27 05:32:39.508233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.526 [2024-11-27 05:32:39.508240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:51.526 [2024-11-27 05:32:39.508468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.526 [2024-11-27 05:32:39.508479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:51.526 [2024-11-27 05:32:39.508490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.526 [2024-11-27 05:32:39.508498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:51.786 passed 00:10:51.786 Test: blockdev nvme passthru rw ...passed 00:10:51.786 Test: blockdev nvme passthru vendor specific ...[2024-11-27 05:32:39.589921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:51.786 [2024-11-27 05:32:39.589939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:51.786 [2024-11-27 05:32:39.590047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:51.786 [2024-11-27 05:32:39.590057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:51.786 [2024-11-27 05:32:39.590177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:51.786 [2024-11-27 05:32:39.590187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:51.786 [2024-11-27 05:32:39.590300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:51.786 [2024-11-27 05:32:39.590309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:51.786 passed 00:10:51.786 Test: blockdev nvme admin passthru ...passed 00:10:51.786 Test: blockdev copy ...passed 00:10:51.786 00:10:51.786 Run Summary: Type Total Ran Passed Failed Inactive 00:10:51.786 suites 1 1 n/a 0 0 00:10:51.786 tests 23 23 23 0 0 00:10:51.786 asserts 152 152 152 0 n/a 00:10:51.786 00:10:51.786 Elapsed time = 1.297 seconds 
00:10:51.786 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.786 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.786 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.044 rmmod nvme_tcp 00:10:52.044 rmmod nvme_fabrics 00:10:52.044 rmmod nvme_keyring 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1657757 ']' 00:10:52.044 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1657757 00:10:52.045 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1657757 ']' 00:10:52.045 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1657757 00:10:52.045 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:52.045 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.045 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1657757 00:10:52.045 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:52.045 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:52.045 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1657757' 00:10:52.045 killing process with pid 1657757 00:10:52.045 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1657757 00:10:52.045 05:32:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1657757 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.304 05:32:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.212 05:32:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.212 00:10:54.212 real 0m10.777s 00:10:54.212 user 0m13.599s 00:10:54.212 sys 0m5.027s 00:10:54.212 05:32:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.212 05:32:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.212 ************************************ 00:10:54.212 END TEST nvmf_bdevio 00:10:54.212 ************************************ 00:10:54.212 05:32:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:54.212 00:10:54.212 real 4m38.442s 00:10:54.212 user 10m29.030s 00:10:54.212 sys 1m37.766s 00:10:54.212 05:32:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.212 05:32:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.212 ************************************ 00:10:54.212 END TEST nvmf_target_core 00:10:54.212 ************************************ 00:10:54.470 05:32:42 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:54.470 05:32:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.470 05:32:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.470 05:32:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:10:54.470 ************************************ 00:10:54.470 START TEST nvmf_target_extra 00:10:54.470 ************************************ 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:54.470 * Looking for test storage... 00:10:54.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.470 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:54.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.471 --rc genhtml_branch_coverage=1 00:10:54.471 --rc genhtml_function_coverage=1 00:10:54.471 --rc genhtml_legend=1 00:10:54.471 --rc geninfo_all_blocks=1 
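The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions ... '<' ...`) compares the detected lcov version against 2 component by component. A minimal Python sketch of that numeric dotted-version comparison, assuming purely numeric components (the shell helper also splits on `-` and `:`, which this sketch ignores):

```python
def lt(v1, v2):
    # Split each version on dots, compare numerically per component,
    # padding the shorter list with zeros (so "2" behaves like "2.0").
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    width = max(len(a), len(b))
    a += [0] * (width - len(a))
    b += [0] * (width - len(b))
    return a < b

print(lt("1.15", "2"))  # the check the log performs for lcov → True
```

This matches the branch taken in the trace: lcov 1.15 is below 2, so the older `--rc lcov_*` coverage options are selected.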
00:10:54.471 --rc geninfo_unexecuted_blocks=1 00:10:54.471 00:10:54.471 ' 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:54.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.471 --rc genhtml_branch_coverage=1 00:10:54.471 --rc genhtml_function_coverage=1 00:10:54.471 --rc genhtml_legend=1 00:10:54.471 --rc geninfo_all_blocks=1 00:10:54.471 --rc geninfo_unexecuted_blocks=1 00:10:54.471 00:10:54.471 ' 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:54.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.471 --rc genhtml_branch_coverage=1 00:10:54.471 --rc genhtml_function_coverage=1 00:10:54.471 --rc genhtml_legend=1 00:10:54.471 --rc geninfo_all_blocks=1 00:10:54.471 --rc geninfo_unexecuted_blocks=1 00:10:54.471 00:10:54.471 ' 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:54.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.471 --rc genhtml_branch_coverage=1 00:10:54.471 --rc genhtml_function_coverage=1 00:10:54.471 --rc genhtml_legend=1 00:10:54.471 --rc geninfo_all_blocks=1 00:10:54.471 --rc geninfo_unexecuted_blocks=1 00:10:54.471 00:10:54.471 ' 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:54.471 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.730 05:32:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:54.731 ************************************ 00:10:54.731 START TEST nvmf_example 00:10:54.731 ************************************ 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:54.731 * Looking for test storage... 00:10:54.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.731 
05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:54.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.731 --rc genhtml_branch_coverage=1 00:10:54.731 --rc genhtml_function_coverage=1 00:10:54.731 --rc genhtml_legend=1 00:10:54.731 --rc geninfo_all_blocks=1 00:10:54.731 --rc geninfo_unexecuted_blocks=1 00:10:54.731 00:10:54.731 ' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:54.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.731 --rc genhtml_branch_coverage=1 00:10:54.731 --rc genhtml_function_coverage=1 00:10:54.731 --rc genhtml_legend=1 00:10:54.731 --rc geninfo_all_blocks=1 00:10:54.731 --rc geninfo_unexecuted_blocks=1 00:10:54.731 00:10:54.731 ' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:54.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.731 --rc genhtml_branch_coverage=1 00:10:54.731 --rc genhtml_function_coverage=1 00:10:54.731 --rc genhtml_legend=1 00:10:54.731 --rc geninfo_all_blocks=1 00:10:54.731 --rc geninfo_unexecuted_blocks=1 00:10:54.731 00:10:54.731 ' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:54.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.731 --rc 
genhtml_branch_coverage=1 00:10:54.731 --rc genhtml_function_coverage=1 00:10:54.731 --rc genhtml_legend=1 00:10:54.731 --rc geninfo_all_blocks=1 00:10:54.731 --rc geninfo_unexecuted_blocks=1 00:10:54.731 00:10:54.731 ' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.731 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.991 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.991 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.991 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:54.991 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.991 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:54.991 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.991 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.991 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.991 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.991 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:54.992 05:32:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.992 
05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.992 05:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.568 05:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:01.568 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:01.568 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:01.568 Found net devices under 0000:86:00.0: cvl_0_0 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.568 05:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:01.568 Found net devices under 0000:86:00.1: cvl_0_1 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.568 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.569 
05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:11:01.569 00:11:01.569 --- 10.0.0.2 ping statistics --- 00:11:01.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.569 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:11:01.569 00:11:01.569 --- 10.0.0.1 ping statistics --- 00:11:01.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.569 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.569 05:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1661826 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1661826 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1661826 ']' 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:11:01.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.569 05:32:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:01.827 
05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:01.827 05:32:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:14.019 Initializing NVMe Controllers 00:11:14.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:14.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:14.019 Initialization complete. Launching workers. 00:11:14.019 ======================================================== 00:11:14.019 Latency(us) 00:11:14.019 Device Information : IOPS MiB/s Average min max 00:11:14.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17951.66 70.12 3564.59 488.86 15525.05 00:11:14.019 ======================================================== 00:11:14.019 Total : 17951.66 70.12 3564.59 488.86 15525.05 00:11:14.019 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.019 rmmod nvme_tcp 00:11:14.019 rmmod nvme_fabrics 00:11:14.019 rmmod nvme_keyring 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1661826 ']' 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1661826 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1661826 ']' 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1661826 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.019 05:32:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1661826 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1661826' 00:11:14.019 killing process with pid 1661826 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1661826 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1661826 00:11:14.019 nvmf threads initialize successfully 00:11:14.019 bdev subsystem init successfully 00:11:14.019 created a nvmf target service 00:11:14.019 create targets's poll groups done 00:11:14.019 all subsystems of target started 00:11:14.019 nvmf target is running 00:11:14.019 all subsystems of target stopped 00:11:14.019 destroy targets's poll groups done 00:11:14.019 destroyed the nvmf target service 00:11:14.019 bdev subsystem 
finish successfully 00:11:14.019 nvmf threads destroy successfully 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.019 05:33:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.589 00:11:14.589 real 0m19.800s 00:11:14.589 user 0m45.896s 00:11:14.589 sys 0m6.116s 00:11:14.589 
05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.589 ************************************ 00:11:14.589 END TEST nvmf_example 00:11:14.589 ************************************ 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:14.589 ************************************ 00:11:14.589 START TEST nvmf_filesystem 00:11:14.589 ************************************ 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:14.589 * Looking for test storage... 
00:11:14.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:14.589 
05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.589 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.589 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:14.589 --rc genhtml_branch_coverage=1 00:11:14.589 --rc genhtml_function_coverage=1 00:11:14.589 --rc genhtml_legend=1 00:11:14.589 --rc geninfo_all_blocks=1 00:11:14.589 --rc geninfo_unexecuted_blocks=1 00:11:14.590 00:11:14.590 ' 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.590 --rc genhtml_branch_coverage=1 00:11:14.590 --rc genhtml_function_coverage=1 00:11:14.590 --rc genhtml_legend=1 00:11:14.590 --rc geninfo_all_blocks=1 00:11:14.590 --rc geninfo_unexecuted_blocks=1 00:11:14.590 00:11:14.590 ' 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.590 --rc genhtml_branch_coverage=1 00:11:14.590 --rc genhtml_function_coverage=1 00:11:14.590 --rc genhtml_legend=1 00:11:14.590 --rc geninfo_all_blocks=1 00:11:14.590 --rc geninfo_unexecuted_blocks=1 00:11:14.590 00:11:14.590 ' 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.590 --rc genhtml_branch_coverage=1 00:11:14.590 --rc genhtml_function_coverage=1 00:11:14.590 --rc genhtml_legend=1 00:11:14.590 --rc geninfo_all_blocks=1 00:11:14.590 --rc geninfo_unexecuted_blocks=1 00:11:14.590 00:11:14.590 ' 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:14.590 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:14.590 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:14.590 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:14.854 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:14.854 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:14.854 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:14.854 
05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:14.854 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:14.854 #define SPDK_CONFIG_H 00:11:14.854 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:14.855 #define SPDK_CONFIG_APPS 1 00:11:14.855 #define SPDK_CONFIG_ARCH native 00:11:14.855 #undef SPDK_CONFIG_ASAN 00:11:14.855 #undef SPDK_CONFIG_AVAHI 00:11:14.855 #undef SPDK_CONFIG_CET 00:11:14.855 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:14.855 #define SPDK_CONFIG_COVERAGE 1 00:11:14.855 #define SPDK_CONFIG_CROSS_PREFIX 00:11:14.855 #undef SPDK_CONFIG_CRYPTO 00:11:14.855 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:14.855 #undef SPDK_CONFIG_CUSTOMOCF 00:11:14.855 #undef SPDK_CONFIG_DAOS 00:11:14.855 #define SPDK_CONFIG_DAOS_DIR 00:11:14.855 #define SPDK_CONFIG_DEBUG 1 00:11:14.855 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:14.855 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:14.855 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:14.855 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:14.855 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:14.855 #undef SPDK_CONFIG_DPDK_UADK 00:11:14.855 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:14.855 #define SPDK_CONFIG_EXAMPLES 1 00:11:14.855 #undef SPDK_CONFIG_FC 00:11:14.855 #define SPDK_CONFIG_FC_PATH 00:11:14.855 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:14.855 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:14.855 #define SPDK_CONFIG_FSDEV 1 00:11:14.855 #undef SPDK_CONFIG_FUSE 00:11:14.855 #undef SPDK_CONFIG_FUZZER 00:11:14.855 #define SPDK_CONFIG_FUZZER_LIB 00:11:14.855 #undef SPDK_CONFIG_GOLANG 00:11:14.855 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:14.855 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:14.855 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:14.855 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:14.855 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:14.855 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:14.855 #undef SPDK_CONFIG_HAVE_LZ4 00:11:14.855 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:14.855 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:14.855 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:14.855 #define SPDK_CONFIG_IDXD 1 00:11:14.855 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:14.855 #undef SPDK_CONFIG_IPSEC_MB 00:11:14.855 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:14.855 #define SPDK_CONFIG_ISAL 1 00:11:14.855 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:14.855 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:14.855 #define SPDK_CONFIG_LIBDIR 00:11:14.855 #undef SPDK_CONFIG_LTO 00:11:14.855 #define SPDK_CONFIG_MAX_LCORES 128 00:11:14.855 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:14.855 #define SPDK_CONFIG_NVME_CUSE 1 00:11:14.855 #undef SPDK_CONFIG_OCF 00:11:14.855 #define SPDK_CONFIG_OCF_PATH 00:11:14.855 #define SPDK_CONFIG_OPENSSL_PATH 00:11:14.855 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:14.855 #define SPDK_CONFIG_PGO_DIR 00:11:14.855 #undef SPDK_CONFIG_PGO_USE 00:11:14.855 #define SPDK_CONFIG_PREFIX /usr/local 00:11:14.855 #undef SPDK_CONFIG_RAID5F 00:11:14.855 #undef SPDK_CONFIG_RBD 00:11:14.855 #define SPDK_CONFIG_RDMA 1 00:11:14.855 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:14.855 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:14.855 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:14.855 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:14.855 #define SPDK_CONFIG_SHARED 1 00:11:14.855 #undef SPDK_CONFIG_SMA 00:11:14.855 #define SPDK_CONFIG_TESTS 1 00:11:14.855 #undef SPDK_CONFIG_TSAN 00:11:14.855 #define SPDK_CONFIG_UBLK 1 00:11:14.855 #define SPDK_CONFIG_UBSAN 1 00:11:14.855 #undef SPDK_CONFIG_UNIT_TESTS 00:11:14.855 #undef SPDK_CONFIG_URING 00:11:14.855 #define SPDK_CONFIG_URING_PATH 00:11:14.855 #undef SPDK_CONFIG_URING_ZNS 00:11:14.855 #undef SPDK_CONFIG_USDT 00:11:14.855 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:14.855 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:14.855 #define SPDK_CONFIG_VFIO_USER 1 00:11:14.855 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:14.855 #define SPDK_CONFIG_VHOST 1 00:11:14.855 #define SPDK_CONFIG_VIRTIO 1 00:11:14.855 #undef SPDK_CONFIG_VTUNE 00:11:14.855 #define SPDK_CONFIG_VTUNE_DIR 00:11:14.855 #define SPDK_CONFIG_WERROR 1 00:11:14.855 #define SPDK_CONFIG_WPDK_DIR 00:11:14.855 #undef SPDK_CONFIG_XNVME 00:11:14.855 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:14.855 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:14.856 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:14.856 
05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:14.856 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:14.856 
05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:14.856 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:14.856 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1664356 ]] 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1664356 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:14.857 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.55cAPh 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.55cAPh/tests/target /tmp/spdk.55cAPh 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189345333248 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963936768 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6618603520 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97971937280 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981968384 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169744896 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192788992 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981345792 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981968384 00:11:14.858 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=622592 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596378112 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596390400 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:14.858 * Looking for test storage... 
00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189345333248 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8833196032 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.858 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:14.858 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:14.859 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.859 --rc genhtml_branch_coverage=1 00:11:14.859 --rc genhtml_function_coverage=1 00:11:14.859 --rc genhtml_legend=1 00:11:14.859 --rc geninfo_all_blocks=1 00:11:14.859 --rc geninfo_unexecuted_blocks=1 00:11:14.859 00:11:14.859 ' 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.859 --rc genhtml_branch_coverage=1 00:11:14.859 --rc genhtml_function_coverage=1 00:11:14.859 --rc genhtml_legend=1 00:11:14.859 --rc geninfo_all_blocks=1 00:11:14.859 --rc geninfo_unexecuted_blocks=1 00:11:14.859 00:11:14.859 ' 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.859 --rc genhtml_branch_coverage=1 00:11:14.859 --rc genhtml_function_coverage=1 00:11:14.859 --rc genhtml_legend=1 00:11:14.859 --rc geninfo_all_blocks=1 00:11:14.859 --rc geninfo_unexecuted_blocks=1 00:11:14.859 00:11:14.859 ' 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.859 --rc genhtml_branch_coverage=1 00:11:14.859 --rc genhtml_function_coverage=1 00:11:14.859 --rc genhtml_legend=1 00:11:14.859 --rc geninfo_all_blocks=1 00:11:14.859 --rc geninfo_unexecuted_blocks=1 00:11:14.859 00:11:14.859 ' 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.859 05:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.859 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.860 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.119 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.119 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.119 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.119 05:33:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.692 05:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.692 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:21.693 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:21.693 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.693 05:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:21.693 Found net devices under 0000:86:00.0: cvl_0_0 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:21.693 Found net devices under 0000:86:00.1: cvl_0_1 00:11:21.693 05:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:11:21.693 00:11:21.693 --- 10.0.0.2 ping statistics --- 00:11:21.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.693 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:11:21.693 00:11:21.693 --- 10.0.0.1 ping statistics --- 00:11:21.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.693 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:21.693 05:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.693 ************************************ 00:11:21.693 START TEST nvmf_filesystem_no_in_capsule 00:11:21.693 ************************************ 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:21.693 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1667551 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1667551 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1667551 ']' 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.694 05:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.694 [2024-11-27 05:33:08.937083] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:11:21.694 [2024-11-27 05:33:08.937128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.694 [2024-11-27 05:33:09.029554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.694 [2024-11-27 05:33:09.070143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.694 [2024-11-27 05:33:09.070182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:21.694 [2024-11-27 05:33:09.070192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.694 [2024-11-27 05:33:09.070200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.694 [2024-11-27 05:33:09.070206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.694 [2024-11-27 05:33:09.071730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.694 [2024-11-27 05:33:09.071825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.694 [2024-11-27 05:33:09.071910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.694 [2024-11-27 05:33:09.071911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.954 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.954 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:21.954 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.954 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.954 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.954 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.954 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:21.954 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.955 [2024-11-27 05:33:09.804255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.955 Malloc1 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.955 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.214 [2024-11-27 05:33:09.963877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:22.214 05:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:22.214 { 00:11:22.214 "name": "Malloc1", 00:11:22.214 "aliases": [ 00:11:22.214 "aa179306-f602-4499-a2a8-c5d7f462f9ec" 00:11:22.214 ], 00:11:22.214 "product_name": "Malloc disk", 00:11:22.214 "block_size": 512, 00:11:22.214 "num_blocks": 1048576, 00:11:22.214 "uuid": "aa179306-f602-4499-a2a8-c5d7f462f9ec", 00:11:22.214 "assigned_rate_limits": { 00:11:22.214 "rw_ios_per_sec": 0, 00:11:22.214 "rw_mbytes_per_sec": 0, 00:11:22.214 "r_mbytes_per_sec": 0, 00:11:22.214 "w_mbytes_per_sec": 0 00:11:22.214 }, 00:11:22.214 "claimed": true, 00:11:22.214 "claim_type": "exclusive_write", 00:11:22.214 "zoned": false, 00:11:22.214 "supported_io_types": { 00:11:22.214 "read": true, 00:11:22.214 "write": true, 00:11:22.214 "unmap": true, 00:11:22.214 "flush": true, 00:11:22.214 "reset": true, 00:11:22.214 "nvme_admin": false, 00:11:22.214 "nvme_io": false, 00:11:22.214 "nvme_io_md": false, 00:11:22.214 "write_zeroes": true, 00:11:22.214 "zcopy": true, 00:11:22.214 "get_zone_info": false, 00:11:22.214 "zone_management": false, 00:11:22.214 "zone_append": false, 00:11:22.214 "compare": false, 00:11:22.214 "compare_and_write": 
false, 00:11:22.214 "abort": true, 00:11:22.214 "seek_hole": false, 00:11:22.214 "seek_data": false, 00:11:22.214 "copy": true, 00:11:22.214 "nvme_iov_md": false 00:11:22.214 }, 00:11:22.214 "memory_domains": [ 00:11:22.214 { 00:11:22.214 "dma_device_id": "system", 00:11:22.214 "dma_device_type": 1 00:11:22.214 }, 00:11:22.214 { 00:11:22.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.214 "dma_device_type": 2 00:11:22.214 } 00:11:22.214 ], 00:11:22.214 "driver_specific": {} 00:11:22.214 } 00:11:22.214 ]' 00:11:22.214 05:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:22.214 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:22.214 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:22.214 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:22.214 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:22.214 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:22.214 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:22.214 05:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.592 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:23.592 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:23.592 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.593 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:23.593 05:33:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:25.498 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:25.498 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:25.498 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.498 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:25.498 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:25.499 05:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:25.499 05:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:26.067 05:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:27.449 05:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.449 ************************************ 00:11:27.449 START TEST filesystem_ext4 00:11:27.449 ************************************ 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:27.449 05:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:27.449 05:33:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:27.449 mke2fs 1.47.0 (5-Feb-2023) 00:11:27.449 Discarding device blocks: 0/522240 done 00:11:27.450 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:27.450 Filesystem UUID: 0bf0658d-75a3-4175-8abe-a06a99d2e15e 00:11:27.450 Superblock backups stored on blocks: 00:11:27.450 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:27.450 00:11:27.450 Allocating group tables: 0/64 done 00:11:27.450 Writing inode tables: 0/64 done 00:11:28.017 Creating journal (8192 blocks): done 00:11:29.473 Writing superblocks and filesystem accounting information: 0/64 done 00:11:29.473 00:11:29.473 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:29.473 05:33:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:36.044 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:36.044 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:36.044 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:36.044 05:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:36.044 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:36.044 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:36.044 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1667551 00:11:36.044 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:36.044 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:36.044 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:36.044 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:36.044 00:11:36.044 real 0m8.444s 00:11:36.044 user 0m0.033s 00:11:36.044 sys 0m0.066s 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:36.045 ************************************ 00:11:36.045 END TEST filesystem_ext4 00:11:36.045 ************************************ 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:36.045 
05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.045 ************************************ 00:11:36.045 START TEST filesystem_btrfs 00:11:36.045 ************************************ 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:36.045 05:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:36.045 btrfs-progs v6.8.1 00:11:36.045 See https://btrfs.readthedocs.io for more information. 00:11:36.045 00:11:36.045 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:36.045 NOTE: several default settings have changed in version 5.15, please make sure 00:11:36.045 this does not affect your deployments: 00:11:36.045 - DUP for metadata (-m dup) 00:11:36.045 - enabled no-holes (-O no-holes) 00:11:36.045 - enabled free-space-tree (-R free-space-tree) 00:11:36.045 00:11:36.045 Label: (null) 00:11:36.045 UUID: 6d8e9512-284b-4b80-830b-c06b38ee8200 00:11:36.045 Node size: 16384 00:11:36.045 Sector size: 4096 (CPU page size: 4096) 00:11:36.045 Filesystem size: 510.00MiB 00:11:36.045 Block group profiles: 00:11:36.045 Data: single 8.00MiB 00:11:36.045 Metadata: DUP 32.00MiB 00:11:36.045 System: DUP 8.00MiB 00:11:36.045 SSD detected: yes 00:11:36.045 Zoned device: no 00:11:36.045 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:36.045 Checksum: crc32c 00:11:36.045 Number of devices: 1 00:11:36.045 Devices: 00:11:36.045 ID SIZE PATH 00:11:36.045 1 510.00MiB /dev/nvme0n1p1 00:11:36.045 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:36.045 05:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:36.045 05:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1667551 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:36.305 00:11:36.305 real 0m0.503s 00:11:36.305 user 0m0.040s 00:11:36.305 sys 0m0.102s 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.305 
05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:36.305 ************************************ 00:11:36.305 END TEST filesystem_btrfs 00:11:36.305 ************************************ 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.305 ************************************ 00:11:36.305 START TEST filesystem_xfs 00:11:36.305 ************************************ 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:36.305 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:36.306 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:36.306 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:36.306 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:36.306 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:36.306 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:36.306 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:36.306 05:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:36.306 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:36.306 = sectsz=512 attr=2, projid32bit=1 00:11:36.306 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:36.306 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:36.306 data = bsize=4096 blocks=130560, imaxpct=25 00:11:36.306 = sunit=0 swidth=0 blks 00:11:36.306 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:36.306 log =internal log bsize=4096 blocks=16384, version=2 00:11:36.306 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:36.306 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:37.265 Discarding blocks...Done. 
00:11:37.265 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:37.265 05:33:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1667551 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.169 05:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.169 00:11:39.169 real 0m2.945s 00:11:39.169 user 0m0.028s 00:11:39.169 sys 0m0.070s 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:39.169 ************************************ 00:11:39.169 END TEST filesystem_xfs 00:11:39.169 ************************************ 00:11:39.169 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1667551 00:11:39.427 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1667551 ']' 00:11:39.428 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1667551 00:11:39.428 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:39.428 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.428 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667551 00:11:39.686 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.686 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.686 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667551' 00:11:39.686 killing process with pid 1667551 00:11:39.686 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1667551 00:11:39.686 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1667551 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:39.946 00:11:39.946 real 0m18.880s 00:11:39.946 user 1m14.466s 00:11:39.946 sys 0m1.434s 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.946 ************************************ 00:11:39.946 END TEST nvmf_filesystem_no_in_capsule 00:11:39.946 ************************************ 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.946 05:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.946 ************************************ 00:11:39.946 START TEST nvmf_filesystem_in_capsule 00:11:39.946 ************************************ 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1671225 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1671225 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1671225 ']' 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.946 05:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.946 05:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.946 [2024-11-27 05:33:27.887201] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:11:39.946 [2024-11-27 05:33:27.887238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.207 [2024-11-27 05:33:27.962952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.207 [2024-11-27 05:33:28.005068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.207 [2024-11-27 05:33:28.005109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.207 [2024-11-27 05:33:28.005116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.207 [2024-11-27 05:33:28.005122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.207 [2024-11-27 05:33:28.005127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:40.207 [2024-11-27 05:33:28.006753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.207 [2024-11-27 05:33:28.006861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.207 [2024-11-27 05:33:28.006957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.207 [2024-11-27 05:33:28.006957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.207 [2024-11-27 05:33:28.141471] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.207 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.467 Malloc1 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.467 05:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.467 [2024-11-27 05:33:28.298845] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.467 05:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.467 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:40.467 { 00:11:40.467 "name": "Malloc1", 00:11:40.467 "aliases": [ 00:11:40.467 "cf7f3877-9639-4797-aafc-7711789ac8e7" 00:11:40.467 ], 00:11:40.467 "product_name": "Malloc disk", 00:11:40.467 "block_size": 512, 00:11:40.467 "num_blocks": 1048576, 00:11:40.468 "uuid": "cf7f3877-9639-4797-aafc-7711789ac8e7", 00:11:40.468 "assigned_rate_limits": { 00:11:40.468 "rw_ios_per_sec": 0, 00:11:40.468 "rw_mbytes_per_sec": 0, 00:11:40.468 "r_mbytes_per_sec": 0, 00:11:40.468 "w_mbytes_per_sec": 0 00:11:40.468 }, 00:11:40.468 "claimed": true, 00:11:40.468 "claim_type": "exclusive_write", 00:11:40.468 "zoned": false, 00:11:40.468 "supported_io_types": { 00:11:40.468 "read": true, 00:11:40.468 "write": true, 00:11:40.468 "unmap": true, 00:11:40.468 "flush": true, 00:11:40.468 "reset": true, 00:11:40.468 "nvme_admin": false, 00:11:40.468 "nvme_io": false, 00:11:40.468 "nvme_io_md": false, 00:11:40.468 "write_zeroes": true, 00:11:40.468 "zcopy": true, 00:11:40.468 "get_zone_info": false, 00:11:40.468 "zone_management": false, 00:11:40.468 "zone_append": false, 00:11:40.468 "compare": false, 00:11:40.468 "compare_and_write": false, 00:11:40.468 "abort": true, 00:11:40.468 "seek_hole": false, 00:11:40.468 "seek_data": false, 00:11:40.468 "copy": true, 00:11:40.468 "nvme_iov_md": false 00:11:40.468 }, 00:11:40.468 "memory_domains": [ 00:11:40.468 { 00:11:40.468 "dma_device_id": "system", 00:11:40.468 "dma_device_type": 1 00:11:40.468 }, 00:11:40.468 { 00:11:40.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.468 "dma_device_type": 2 00:11:40.468 } 00:11:40.468 ], 00:11:40.468 
"driver_specific": {} 00:11:40.468 } 00:11:40.468 ]' 00:11:40.468 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:40.468 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:40.468 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:40.468 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:40.468 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:40.468 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:40.468 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:40.468 05:33:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.847 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.847 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:41.847 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.847 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:41.847 05:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:43.753 05:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:43.753 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:44.013 05:33:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:44.951 05:33:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.890 ************************************ 00:11:45.890 START TEST filesystem_in_capsule_ext4 00:11:45.890 ************************************ 00:11:45.890 05:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:45.890 05:33:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:45.890 mke2fs 1.47.0 (5-Feb-2023) 00:11:45.890 Discarding device blocks: 
0/522240 done 00:11:45.890 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:45.890 Filesystem UUID: 6aabaaf1-054c-43f5-a7b9-e4be04c15060 00:11:45.891 Superblock backups stored on blocks: 00:11:45.891 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:45.891 00:11:45.891 Allocating group tables: 0/64 done 00:11:45.891 Writing inode tables: 0/64 done 00:11:46.828 Creating journal (8192 blocks): done 00:11:46.828 Writing superblocks and filesystem accounting information: 0/64 done 00:11:46.828 00:11:46.828 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:46.828 05:33:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1671225 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.099 00:11:52.099 real 0m6.303s 00:11:52.099 user 0m0.018s 00:11:52.099 sys 0m0.083s 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:52.099 ************************************ 00:11:52.099 END TEST filesystem_in_capsule_ext4 00:11:52.099 ************************************ 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.099 05:33:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.099 ************************************ 00:11:52.099 START 
TEST filesystem_in_capsule_btrfs 00:11:52.099 ************************************ 00:11:52.099 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:52.099 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:52.099 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.099 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:52.099 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:52.099 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:52.099 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:52.099 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:52.099 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:52.099 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:52.100 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:52.358 btrfs-progs v6.8.1 00:11:52.358 See https://btrfs.readthedocs.io for more information. 00:11:52.358 00:11:52.358 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:52.358 NOTE: several default settings have changed in version 5.15, please make sure 00:11:52.358 this does not affect your deployments: 00:11:52.358 - DUP for metadata (-m dup) 00:11:52.358 - enabled no-holes (-O no-holes) 00:11:52.358 - enabled free-space-tree (-R free-space-tree) 00:11:52.358 00:11:52.358 Label: (null) 00:11:52.358 UUID: c261ee9b-2d87-4808-9548-a7d4d5f868f2 00:11:52.358 Node size: 16384 00:11:52.358 Sector size: 4096 (CPU page size: 4096) 00:11:52.358 Filesystem size: 510.00MiB 00:11:52.358 Block group profiles: 00:11:52.358 Data: single 8.00MiB 00:11:52.358 Metadata: DUP 32.00MiB 00:11:52.358 System: DUP 8.00MiB 00:11:52.358 SSD detected: yes 00:11:52.358 Zoned device: no 00:11:52.358 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:52.358 Checksum: crc32c 00:11:52.358 Number of devices: 1 00:11:52.358 Devices: 00:11:52.358 ID SIZE PATH 00:11:52.358 1 510.00MiB /dev/nvme0n1p1 00:11:52.358 00:11:52.358 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:52.358 05:33:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1671225 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.296 00:11:53.296 real 0m1.159s 00:11:53.296 user 0m0.032s 00:11:53.296 sys 0m0.112s 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.296 ************************************ 00:11:53.296 END TEST filesystem_in_capsule_btrfs 00:11:53.296 ************************************ 00:11:53.296 05:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.296 ************************************ 00:11:53.296 START TEST filesystem_in_capsule_xfs 00:11:53.296 ************************************ 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:53.296 
05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:53.296 05:33:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:53.865 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:53.865 = sectsz=512 attr=2, projid32bit=1 00:11:53.865 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:53.865 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:53.865 data = bsize=4096 blocks=130560, imaxpct=25 00:11:53.865 = sunit=0 swidth=0 blks 00:11:53.865 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:53.865 log =internal log bsize=4096 blocks=16384, version=2 00:11:53.865 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:53.865 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:54.433 Discarding blocks...Done. 
00:11:54.433 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:54.433 05:33:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1671225 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.970 00:11:56.970 real 0m3.448s 00:11:56.970 user 0m0.029s 00:11:56.970 sys 0m0.067s 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:56.970 ************************************ 00:11:56.970 END TEST filesystem_in_capsule_xfs 00:11:56.970 ************************************ 00:11:56.970 05:33:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.230 05:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1671225 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1671225 ']' 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1671225 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.230 05:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1671225 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1671225' 00:11:57.230 killing process with pid 1671225 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1671225 00:11:57.230 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1671225 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:57.799 00:11:57.799 real 0m17.720s 00:11:57.799 user 1m9.709s 00:11:57.799 sys 0m1.467s 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 ************************************ 00:11:57.799 END TEST nvmf_filesystem_in_capsule 00:11:57.799 ************************************ 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:57.799 rmmod nvme_tcp 00:11:57.799 rmmod nvme_fabrics 00:11:57.799 rmmod nvme_keyring 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.799 05:33:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.335 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.335 00:12:00.335 real 0m45.318s 00:12:00.335 user 2m26.278s 00:12:00.335 sys 0m7.548s 00:12:00.335 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.335 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.335 ************************************ 00:12:00.335 END TEST nvmf_filesystem 00:12:00.335 ************************************ 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.336 ************************************ 00:12:00.336 START TEST nvmf_target_discovery 00:12:00.336 ************************************ 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:00.336 * Looking for test storage... 
00:12:00.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:00.336 
05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:00.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.336 --rc genhtml_branch_coverage=1 00:12:00.336 --rc genhtml_function_coverage=1 00:12:00.336 --rc genhtml_legend=1 00:12:00.336 --rc geninfo_all_blocks=1 00:12:00.336 --rc geninfo_unexecuted_blocks=1 00:12:00.336 00:12:00.336 ' 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:00.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.336 --rc genhtml_branch_coverage=1 00:12:00.336 --rc genhtml_function_coverage=1 00:12:00.336 --rc genhtml_legend=1 00:12:00.336 --rc geninfo_all_blocks=1 00:12:00.336 --rc geninfo_unexecuted_blocks=1 00:12:00.336 00:12:00.336 ' 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:00.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.336 --rc genhtml_branch_coverage=1 00:12:00.336 --rc genhtml_function_coverage=1 00:12:00.336 --rc genhtml_legend=1 00:12:00.336 --rc geninfo_all_blocks=1 00:12:00.336 --rc geninfo_unexecuted_blocks=1 00:12:00.336 00:12:00.336 ' 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:00.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.336 --rc genhtml_branch_coverage=1 00:12:00.336 --rc genhtml_function_coverage=1 00:12:00.336 --rc genhtml_legend=1 00:12:00.336 --rc geninfo_all_blocks=1 00:12:00.336 --rc geninfo_unexecuted_blocks=1 00:12:00.336 00:12:00.336 ' 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.336 05:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:00.336 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:00.337 05:33:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.337 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:00.337 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:00.337 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:00.337 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.337 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.337 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.337 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:00.337 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:00.337 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.337 05:33:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.908 05:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.908 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:06.908 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.909 05:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:06.909 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:06.909 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.909 05:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:06.909 Found net devices under 0000:86:00.0: cvl_0_0 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:06.909 05:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:06.909 Found net devices under 0000:86:00.1: cvl_0_1 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:06.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:12:06.909 00:12:06.909 --- 10.0.0.2 ping statistics --- 00:12:06.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.909 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:12:06.909 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:12:06.909 00:12:06.909 --- 10.0.0.1 ping statistics --- 00:12:06.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.910 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1677745 00:12:06.910 05:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1677745 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1677745 ']' 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.910 05:33:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 [2024-11-27 05:33:53.969364] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:06.910 [2024-11-27 05:33:53.969407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.910 [2024-11-27 05:33:54.046150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.910 [2024-11-27 05:33:54.088871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:06.910 [2024-11-27 05:33:54.088909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.910 [2024-11-27 05:33:54.088919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.910 [2024-11-27 05:33:54.088926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.910 [2024-11-27 05:33:54.088932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.910 [2024-11-27 05:33:54.090556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.910 [2024-11-27 05:33:54.090664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.910 [2024-11-27 05:33:54.090770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.910 [2024-11-27 05:33:54.090771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 [2024-11-27 05:33:54.229252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 Null1 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 
05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 [2024-11-27 05:33:54.281824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 Null2 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 
05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 Null3 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.910 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.911 Null4 00:12:06.911 
05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:06.911 00:12:06.911 Discovery Log Number of Records 6, Generation counter 6 00:12:06.911 =====Discovery Log Entry 0====== 00:12:06.911 trtype: tcp 00:12:06.911 adrfam: ipv4 00:12:06.911 subtype: current discovery subsystem 00:12:06.911 treq: not required 00:12:06.911 portid: 0 00:12:06.911 trsvcid: 4420 00:12:06.911 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:06.911 traddr: 10.0.0.2 00:12:06.911 eflags: explicit discovery connections, duplicate discovery information 00:12:06.911 sectype: none 00:12:06.911 =====Discovery Log Entry 1====== 00:12:06.911 trtype: tcp 00:12:06.911 adrfam: ipv4 00:12:06.911 subtype: nvme subsystem 00:12:06.911 treq: not required 00:12:06.911 portid: 0 00:12:06.911 trsvcid: 4420 00:12:06.911 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:06.911 traddr: 10.0.0.2 00:12:06.911 eflags: none 00:12:06.911 sectype: none 00:12:06.911 =====Discovery Log Entry 2====== 00:12:06.911 
trtype: tcp 00:12:06.911 adrfam: ipv4 00:12:06.911 subtype: nvme subsystem 00:12:06.911 treq: not required 00:12:06.911 portid: 0 00:12:06.911 trsvcid: 4420 00:12:06.911 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:06.911 traddr: 10.0.0.2 00:12:06.911 eflags: none 00:12:06.911 sectype: none 00:12:06.911 =====Discovery Log Entry 3====== 00:12:06.911 trtype: tcp 00:12:06.911 adrfam: ipv4 00:12:06.911 subtype: nvme subsystem 00:12:06.911 treq: not required 00:12:06.911 portid: 0 00:12:06.911 trsvcid: 4420 00:12:06.911 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:06.911 traddr: 10.0.0.2 00:12:06.911 eflags: none 00:12:06.911 sectype: none 00:12:06.911 =====Discovery Log Entry 4====== 00:12:06.911 trtype: tcp 00:12:06.911 adrfam: ipv4 00:12:06.911 subtype: nvme subsystem 00:12:06.911 treq: not required 00:12:06.911 portid: 0 00:12:06.911 trsvcid: 4420 00:12:06.911 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:06.911 traddr: 10.0.0.2 00:12:06.911 eflags: none 00:12:06.911 sectype: none 00:12:06.911 =====Discovery Log Entry 5====== 00:12:06.911 trtype: tcp 00:12:06.911 adrfam: ipv4 00:12:06.911 subtype: discovery subsystem referral 00:12:06.911 treq: not required 00:12:06.911 portid: 0 00:12:06.911 trsvcid: 4430 00:12:06.911 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:06.911 traddr: 10.0.0.2 00:12:06.911 eflags: none 00:12:06.911 sectype: none 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:06.911 Perform nvmf subsystem discovery via RPC 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.911 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.911 [ 00:12:06.911 { 00:12:06.911 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:06.911 "subtype": "Discovery", 00:12:06.911 "listen_addresses": [ 00:12:06.911 { 00:12:06.911 "trtype": "TCP", 00:12:06.911 "adrfam": "IPv4", 00:12:06.911 "traddr": "10.0.0.2", 00:12:06.911 "trsvcid": "4420" 00:12:06.911 } 00:12:06.911 ], 00:12:06.911 "allow_any_host": true, 00:12:06.911 "hosts": [] 00:12:06.911 }, 00:12:06.911 { 00:12:06.911 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.911 "subtype": "NVMe", 00:12:06.911 "listen_addresses": [ 00:12:06.911 { 00:12:06.911 "trtype": "TCP", 00:12:06.911 "adrfam": "IPv4", 00:12:06.911 "traddr": "10.0.0.2", 00:12:06.911 "trsvcid": "4420" 00:12:06.911 } 00:12:06.911 ], 00:12:06.911 "allow_any_host": true, 00:12:06.911 "hosts": [], 00:12:06.911 "serial_number": "SPDK00000000000001", 00:12:06.911 "model_number": "SPDK bdev Controller", 00:12:06.911 "max_namespaces": 32, 00:12:06.911 "min_cntlid": 1, 00:12:06.911 "max_cntlid": 65519, 00:12:06.911 "namespaces": [ 00:12:06.911 { 00:12:06.911 "nsid": 1, 00:12:06.911 "bdev_name": "Null1", 00:12:06.911 "name": "Null1", 00:12:06.911 "nguid": "C8C3B4B8115D420983C0B8483EC48D04", 00:12:06.911 "uuid": "c8c3b4b8-115d-4209-83c0-b8483ec48d04" 00:12:06.911 } 00:12:06.911 ] 00:12:06.911 }, 00:12:06.911 { 00:12:06.911 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:06.911 "subtype": "NVMe", 00:12:06.911 "listen_addresses": [ 00:12:06.911 { 00:12:06.911 "trtype": "TCP", 00:12:06.911 "adrfam": "IPv4", 00:12:06.911 "traddr": "10.0.0.2", 00:12:06.911 "trsvcid": "4420" 00:12:06.911 } 00:12:06.911 ], 00:12:06.911 "allow_any_host": true, 00:12:06.911 "hosts": [], 00:12:06.911 "serial_number": "SPDK00000000000002", 00:12:06.911 "model_number": "SPDK bdev Controller", 00:12:06.911 "max_namespaces": 32, 00:12:06.911 "min_cntlid": 1, 00:12:06.911 "max_cntlid": 65519, 00:12:06.911 "namespaces": [ 00:12:06.911 { 00:12:06.911 "nsid": 1, 00:12:06.911 "bdev_name": "Null2", 00:12:06.911 "name": "Null2", 00:12:06.911 "nguid": "F3AB1EF0E5B14CB08506FE045B66F4B6", 
00:12:06.911 "uuid": "f3ab1ef0-e5b1-4cb0-8506-fe045b66f4b6" 00:12:06.911 } 00:12:06.911 ] 00:12:06.911 }, 00:12:06.911 { 00:12:06.911 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:06.911 "subtype": "NVMe", 00:12:06.911 "listen_addresses": [ 00:12:06.911 { 00:12:06.911 "trtype": "TCP", 00:12:06.911 "adrfam": "IPv4", 00:12:06.911 "traddr": "10.0.0.2", 00:12:06.911 "trsvcid": "4420" 00:12:06.911 } 00:12:06.911 ], 00:12:06.911 "allow_any_host": true, 00:12:06.911 "hosts": [], 00:12:06.911 "serial_number": "SPDK00000000000003", 00:12:06.911 "model_number": "SPDK bdev Controller", 00:12:06.911 "max_namespaces": 32, 00:12:06.911 "min_cntlid": 1, 00:12:06.911 "max_cntlid": 65519, 00:12:06.911 "namespaces": [ 00:12:06.911 { 00:12:06.911 "nsid": 1, 00:12:06.911 "bdev_name": "Null3", 00:12:06.911 "name": "Null3", 00:12:06.911 "nguid": "06AF1E25D1CE4C1598EB54D9CF86C2A2", 00:12:06.911 "uuid": "06af1e25-d1ce-4c15-98eb-54d9cf86c2a2" 00:12:06.911 } 00:12:06.911 ] 00:12:06.911 }, 00:12:06.911 { 00:12:06.911 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:06.911 "subtype": "NVMe", 00:12:06.911 "listen_addresses": [ 00:12:06.911 { 00:12:06.911 "trtype": "TCP", 00:12:06.911 "adrfam": "IPv4", 00:12:06.911 "traddr": "10.0.0.2", 00:12:06.911 "trsvcid": "4420" 00:12:06.911 } 00:12:06.911 ], 00:12:06.911 "allow_any_host": true, 00:12:06.911 "hosts": [], 00:12:06.911 "serial_number": "SPDK00000000000004", 00:12:06.911 "model_number": "SPDK bdev Controller", 00:12:06.911 "max_namespaces": 32, 00:12:06.911 "min_cntlid": 1, 00:12:06.911 "max_cntlid": 65519, 00:12:06.911 "namespaces": [ 00:12:06.911 { 00:12:06.911 "nsid": 1, 00:12:06.911 "bdev_name": "Null4", 00:12:06.911 "name": "Null4", 00:12:06.911 "nguid": "BAC9E0F3608C450B875338C712D9EFBC", 00:12:06.911 "uuid": "bac9e0f3-608c-450b-8753-38c712d9efbc" 00:12:06.911 } 00:12:06.911 ] 00:12:06.911 } 00:12:06.912 ] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 
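The `nvmf_get_subsystems` listing above is the JSON the test later filters with `jq` (e.g. `jq -r '.[].name'` over `bdev_get_bdevs` at discovery.sh@49). As a minimal sketch of that kind of check, the snippet below parses a trimmed sample shaped like the captured output and extracts the NVMe subsystem NQNs. The `nvme_subsystem_nqns` helper and the trimmed sample are illustrative only, not part of the actual test script; the field names (`nqn`, `subtype`, `namespaces`, `bdev_name`) are taken directly from the output captured above.

```python
import json

# Trimmed sample shaped like the nvmf_get_subsystems output captured in this log:
# the discovery subsystem plus two of the four cnode subsystems.
sample = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Null1"}]},
  {"nqn": "nqn.2016-06.io.spdk:cnode2", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Null2"}]}
]
""")

def nvme_subsystem_nqns(subsystems):
    """Return the NQNs of non-discovery (NVMe) subsystems,
    analogous to filtering the RPC output with jq."""
    return [s["nqn"] for s in subsystems if s.get("subtype") == "NVMe"]

print(nvme_subsystem_nqns(sample))
```

In the real test this filtering happens in bash via `rpc_cmd ... | jq`; the Python form is shown only because it is self-contained and does not need a running SPDK target.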
05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.912 rmmod nvme_tcp 00:12:06.912 rmmod nvme_fabrics 00:12:06.912 rmmod nvme_keyring 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1677745 ']' 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1677745 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1677745 ']' 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1677745 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1677745 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1677745' 00:12:06.912 killing process with pid 1677745 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1677745 00:12:06.912 05:33:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1677745 00:12:07.171 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.171 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:07.171 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:07.171 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:07.172 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:07.172 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:07.172 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:07.172 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.172 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:07.172 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.172 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.172 05:33:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:09.709 00:12:09.709 real 0m9.343s 00:12:09.709 user 0m5.735s 00:12:09.709 sys 0m4.818s 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.709 ************************************ 00:12:09.709 END TEST nvmf_target_discovery 00:12:09.709 ************************************ 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.709 ************************************ 00:12:09.709 START TEST nvmf_referrals 00:12:09.709 ************************************ 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:09.709 * Looking for test storage... 
00:12:09.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:09.709 05:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:09.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.709 
--rc genhtml_branch_coverage=1 00:12:09.709 --rc genhtml_function_coverage=1 00:12:09.709 --rc genhtml_legend=1 00:12:09.709 --rc geninfo_all_blocks=1 00:12:09.709 --rc geninfo_unexecuted_blocks=1 00:12:09.709 00:12:09.709 ' 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:09.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.709 --rc genhtml_branch_coverage=1 00:12:09.709 --rc genhtml_function_coverage=1 00:12:09.709 --rc genhtml_legend=1 00:12:09.709 --rc geninfo_all_blocks=1 00:12:09.709 --rc geninfo_unexecuted_blocks=1 00:12:09.709 00:12:09.709 ' 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:09.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.709 --rc genhtml_branch_coverage=1 00:12:09.709 --rc genhtml_function_coverage=1 00:12:09.709 --rc genhtml_legend=1 00:12:09.709 --rc geninfo_all_blocks=1 00:12:09.709 --rc geninfo_unexecuted_blocks=1 00:12:09.709 00:12:09.709 ' 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:09.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.709 --rc genhtml_branch_coverage=1 00:12:09.709 --rc genhtml_function_coverage=1 00:12:09.709 --rc genhtml_legend=1 00:12:09.709 --rc geninfo_all_blocks=1 00:12:09.709 --rc geninfo_unexecuted_blocks=1 00:12:09.709 00:12:09.709 ' 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.709 
05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.709 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.710 05:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:09.710 05:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:09.710 05:33:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:16.430 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.430 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:16.430 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:16.431 Found net devices under 0000:86:00.0: cvl_0_0 00:12:16.431 05:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:16.431 Found net devices under 0000:86:00.1: cvl_0_1 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:16.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:12:16.431 00:12:16.431 --- 10.0.0.2 ping statistics --- 00:12:16.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.431 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:12:16.431 00:12:16.431 --- 10.0.0.1 ping statistics --- 00:12:16.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.431 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1681523 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1681523 00:12:16.431 
05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1681523 ']' 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.431 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 [2024-11-27 05:34:03.456991] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:16.432 [2024-11-27 05:34:03.457037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.432 [2024-11-27 05:34:03.533322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.432 [2024-11-27 05:34:03.575881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.432 [2024-11-27 05:34:03.575921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:16.432 [2024-11-27 05:34:03.575930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.432 [2024-11-27 05:34:03.575938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.432 [2024-11-27 05:34:03.575944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.432 [2024-11-27 05:34:03.577564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.432 [2024-11-27 05:34:03.577686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.432 [2024-11-27 05:34:03.577780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.432 [2024-11-27 05:34:03.577781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 [2024-11-27 05:34:03.720314] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 [2024-11-27 05:34:03.745844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:16.432 05:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.432 05:34:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.432 05:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.432 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.433 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.777 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.778 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.037 05:34:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:17.296 05:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.296 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.555 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.814 rmmod nvme_tcp 00:12:17.814 rmmod nvme_fabrics 00:12:17.814 rmmod nvme_keyring 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1681523 ']' 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1681523 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1681523 ']' 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1681523 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.814 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1681523 00:12:18.073 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.073 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1681523' 00:12:18.074 killing process with pid 1681523 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 1681523 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1681523 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.074 05:34:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.614 00:12:20.614 real 0m10.852s 00:12:20.614 user 0m12.218s 00:12:20.614 sys 0m5.218s 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:20.614 
************************************ 00:12:20.614 END TEST nvmf_referrals 00:12:20.614 ************************************ 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.614 ************************************ 00:12:20.614 START TEST nvmf_connect_disconnect 00:12:20.614 ************************************ 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:20.614 * Looking for test storage... 
00:12:20.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:20.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.614 --rc genhtml_branch_coverage=1 00:12:20.614 --rc genhtml_function_coverage=1 00:12:20.614 --rc genhtml_legend=1 00:12:20.614 --rc geninfo_all_blocks=1 00:12:20.614 --rc geninfo_unexecuted_blocks=1 00:12:20.614 00:12:20.614 ' 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:20.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.614 --rc genhtml_branch_coverage=1 00:12:20.614 --rc genhtml_function_coverage=1 00:12:20.614 --rc genhtml_legend=1 00:12:20.614 --rc geninfo_all_blocks=1 00:12:20.614 --rc geninfo_unexecuted_blocks=1 00:12:20.614 00:12:20.614 ' 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:20.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.614 --rc genhtml_branch_coverage=1 00:12:20.614 --rc genhtml_function_coverage=1 00:12:20.614 --rc genhtml_legend=1 00:12:20.614 --rc geninfo_all_blocks=1 00:12:20.614 --rc geninfo_unexecuted_blocks=1 00:12:20.614 00:12:20.614 ' 00:12:20.614 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:20.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.614 --rc genhtml_branch_coverage=1 00:12:20.614 --rc genhtml_function_coverage=1 00:12:20.614 --rc genhtml_legend=1 00:12:20.614 --rc geninfo_all_blocks=1 00:12:20.614 --rc geninfo_unexecuted_blocks=1 00:12:20.614 00:12:20.614 ' 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.615 05:34:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.190 05:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:27.190 05:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:27.190 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.190 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:27.191 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.191 05:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:27.191 Found net devices under 0000:86:00.0: cvl_0_0 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.191 05:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:27.191 Found net devices under 0000:86:00.1: cvl_0_1 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.191 05:34:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.191 05:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:27.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.463 ms 00:12:27.191 00:12:27.191 --- 10.0.0.2 ping statistics --- 00:12:27.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.191 rtt min/avg/max/mdev = 0.463/0.463/0.463/0.000 ms 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:12:27.191 00:12:27.191 --- 10.0.0.1 ping statistics --- 00:12:27.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.191 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1685570 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1685570 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1685570 ']' 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.191 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.191 [2024-11-27 05:34:14.335909] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:27.191 [2024-11-27 05:34:14.335953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.191 [2024-11-27 05:34:14.415615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.191 [2024-11-27 05:34:14.457485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:27.191 [2024-11-27 05:34:14.457526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.191 [2024-11-27 05:34:14.457535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.191 [2024-11-27 05:34:14.457542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.191 [2024-11-27 05:34:14.457548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.191 [2024-11-27 05:34:14.459173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.191 [2024-11-27 05:34:14.459281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.192 [2024-11-27 05:34:14.459392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.192 [2024-11-27 05:34:14.459392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:27.192 05:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.192 [2024-11-27 05:34:14.596501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.192 05:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.192 [2024-11-27 05:34:14.661336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:27.192 05:34:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:30.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:42.897 05:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:42.897 rmmod nvme_tcp 00:12:42.897 rmmod nvme_fabrics 00:12:42.897 rmmod nvme_keyring 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1685570 ']' 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1685570 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1685570 ']' 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1685570 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1685570 
00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1685570' 00:12:42.897 killing process with pid 1685570 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1685570 00:12:42.897 05:34:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1685570 00:12:43.157 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:43.157 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:43.157 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:43.157 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:43.157 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:43.157 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:43.158 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:43.158 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:43.158 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:43.158 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.158 05:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.158 05:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:45.695 00:12:45.695 real 0m24.998s 00:12:45.695 user 1m7.657s 00:12:45.695 sys 0m5.756s 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.695 ************************************ 00:12:45.695 END TEST nvmf_connect_disconnect 00:12:45.695 ************************************ 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.695 ************************************ 00:12:45.695 START TEST nvmf_multitarget 00:12:45.695 ************************************ 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.695 * Looking for test storage... 
00:12:45.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:45.695 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:45.696 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.696 --rc genhtml_branch_coverage=1 00:12:45.696 --rc genhtml_function_coverage=1 00:12:45.696 --rc genhtml_legend=1 00:12:45.696 --rc geninfo_all_blocks=1 00:12:45.696 --rc geninfo_unexecuted_blocks=1 00:12:45.696 00:12:45.696 ' 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:45.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.696 --rc genhtml_branch_coverage=1 00:12:45.696 --rc genhtml_function_coverage=1 00:12:45.696 --rc genhtml_legend=1 00:12:45.696 --rc geninfo_all_blocks=1 00:12:45.696 --rc geninfo_unexecuted_blocks=1 00:12:45.696 00:12:45.696 ' 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:45.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.696 --rc genhtml_branch_coverage=1 00:12:45.696 --rc genhtml_function_coverage=1 00:12:45.696 --rc genhtml_legend=1 00:12:45.696 --rc geninfo_all_blocks=1 00:12:45.696 --rc geninfo_unexecuted_blocks=1 00:12:45.696 00:12:45.696 ' 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:45.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.696 --rc genhtml_branch_coverage=1 00:12:45.696 --rc genhtml_function_coverage=1 00:12:45.696 --rc genhtml_legend=1 00:12:45.696 --rc geninfo_all_blocks=1 00:12:45.696 --rc geninfo_unexecuted_blocks=1 00:12:45.696 00:12:45.696 ' 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.696 05:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.696 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
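The trace above assembles the `nvmf_tgt` argument list into a bash array, and the entry immediately below it shows `'[' '' -eq 1 ']'` failing with `[: : integer expression expected`, because an empty string is not a valid integer operand for `-eq`. A minimal sketch of both patterns (variable values here are hypothetical stand-ins, not SPDK's exact code):

```shell
# Build the app argument list incrementally, as nvmf/common.sh does.
NVMF_APP=(nvmf_tgt)
NVMF_APP_SHM_ID=0
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)

# An empty variable tested with -eq reproduces the traced error;
# a default expansion sidesteps it.
no_huge=""                                # hypothetical stand-in for the empty value
[ "$no_huge" -eq 1 ] 2>/dev/null || true  # errors: "" is not an integer
[ "${no_huge:-0}" -eq 1 ] || echo "huge pages enabled"   # → huge pages enabled
```

The `${var:-default}` expansion substitutes `0` only when the variable is unset or empty, so the comparison always receives an integer and the harness warning disappears.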
00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.697 05:34:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.697 05:34:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:52.272 05:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:52.272 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:52.273 05:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:52.273 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:52.273 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.273 05:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:52.273 Found net devices under 0000:86:00.0: cvl_0_0 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.273 
05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:52.273 Found net devices under 0000:86:00.1: cvl_0_1 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.273 05:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.273 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:52.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:12:52.274 00:12:52.274 --- 10.0.0.2 ping statistics --- 00:12:52.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.274 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:12:52.274 00:12:52.274 --- 10.0.0.1 ping statistics --- 00:12:52.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.274 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1691827 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1691827 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1691827 ']' 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.274 05:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.274 [2024-11-27 05:34:39.458556] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:12:52.274 [2024-11-27 05:34:39.458608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.274 [2024-11-27 05:34:39.539655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.274 [2024-11-27 05:34:39.582134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.274 [2024-11-27 05:34:39.582173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
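The trace above launches `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and then blocks in `waitforlisten 1691827` until the RPC socket `/var/tmp/spdk.sock` appears, bounded by `max_retries=100`. The polling pattern can be sketched as follows (a simplified reconstruction, not SPDK's exact implementation; the optional third parameter is an addition for illustration):

```shell
# Poll until a process is alive and its RPC unix socket exists,
# giving up after a bounded number of retries.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
  local max_retries=${3:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # process already exited
    [ -S "$rpc_addr" ] && return 0           # unix socket is up, RPC ready
    sleep 0.1
  done
  return 1                                   # timed out waiting for the socket
}
```

`kill -0` sends no signal and only checks that the pid still exists, which is why the harness can distinguish "target crashed during startup" from "target is still initializing" before issuing the first RPC.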
00:12:52.274 [2024-11-27 05:34:39.582183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.274 [2024-11-27 05:34:39.582190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.274 [2024-11-27 05:34:39.582196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.274 [2024-11-27 05:34:39.583721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.274 [2024-11-27 05:34:39.583827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.274 [2024-11-27 05:34:39.583937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.274 [2024-11-27 05:34:39.583938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.534 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.534 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:52.534 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:52.534 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:52.534 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.534 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.535 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:52.535 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.535 05:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:52.535 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:52.535 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:52.535 "nvmf_tgt_1" 00:12:52.795 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:52.795 "nvmf_tgt_2" 00:12:52.795 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.795 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:52.795 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:52.795 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:53.053 true 00:12:53.053 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:53.053 true 00:12:53.053 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:53.053 05:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:53.312 rmmod nvme_tcp 00:12:53.312 rmmod nvme_fabrics 00:12:53.312 rmmod nvme_keyring 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1691827 ']' 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1691827 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1691827 ']' 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1691827 00:12:53.312 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:53.313 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.313 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1691827 00:12:53.313 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:53.313 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:53.313 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1691827' 00:12:53.313 killing process with pid 1691827 00:12:53.313 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1691827 00:12:53.313 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1691827 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.572 05:34:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.480 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:55.480 00:12:55.480 real 0m10.250s 00:12:55.480 user 0m9.802s 00:12:55.480 sys 0m5.048s 00:12:55.480 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:55.480 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:55.480 ************************************ 00:12:55.480 END TEST nvmf_multitarget 00:12:55.480 ************************************ 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.740 ************************************ 00:12:55.740 START TEST nvmf_rpc 00:12:55.740 ************************************ 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:55.740 * Looking for test storage... 
00:12:55.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.740 05:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.740 --rc genhtml_branch_coverage=1 00:12:55.740 --rc genhtml_function_coverage=1 00:12:55.740 --rc genhtml_legend=1 00:12:55.740 --rc geninfo_all_blocks=1 00:12:55.740 --rc geninfo_unexecuted_blocks=1 
00:12:55.740 00:12:55.740 ' 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.740 --rc genhtml_branch_coverage=1 00:12:55.740 --rc genhtml_function_coverage=1 00:12:55.740 --rc genhtml_legend=1 00:12:55.740 --rc geninfo_all_blocks=1 00:12:55.740 --rc geninfo_unexecuted_blocks=1 00:12:55.740 00:12:55.740 ' 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.740 --rc genhtml_branch_coverage=1 00:12:55.740 --rc genhtml_function_coverage=1 00:12:55.740 --rc genhtml_legend=1 00:12:55.740 --rc geninfo_all_blocks=1 00:12:55.740 --rc geninfo_unexecuted_blocks=1 00:12:55.740 00:12:55.740 ' 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.740 --rc genhtml_branch_coverage=1 00:12:55.740 --rc genhtml_function_coverage=1 00:12:55.740 --rc genhtml_legend=1 00:12:55.740 --rc geninfo_all_blocks=1 00:12:55.740 --rc geninfo_unexecuted_blocks=1 00:12:55.740 00:12:55.740 ' 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.740 05:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.740 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.741 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:56.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:56.001 05:34:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:56.001 05:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.579 
05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:13:02.579 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:02.579 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.579 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:02.580 Found net devices under 0000:86:00.0: cvl_0_0 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:02.580 Found net devices under 0000:86:00.1: cvl_0_1 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.580 05:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:02.580 
05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:02.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:13:02.580 00:13:02.580 --- 10.0.0.2 ping statistics --- 00:13:02.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.580 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:13:02.580 00:13:02.580 --- 10.0.0.1 ping statistics --- 00:13:02.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.580 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1695803 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1695803 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1695803 ']' 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.580 05:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.580 [2024-11-27 05:34:49.805549] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:13:02.580 [2024-11-27 05:34:49.805597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.580 [2024-11-27 05:34:49.885786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.580 [2024-11-27 05:34:49.927661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.580 [2024-11-27 05:34:49.927702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
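The trace up to this point (nvmf/common.sh's `nvmf_tcp_init`) builds the test topology: the target-side interface is moved into a private network namespace, each side gets a 10.0.0.x/24 address, port 4420 is opened in iptables, and connectivity is verified with pings in both directions. A minimal sketch of that sequence follows; the device names mirror the log but are host-specific, and since the real commands need root they are routed through a dry-run `run` wrapper (an illustrative stand-in, not part of the SPDK scripts) that prints instead of executing.

```shell
# Dry-run sketch of the namespace topology nvmf_tcp_init builds.
# Device names match this particular CI host; swap run() for
# 'sudo "$@"' to actually apply the configuration.
run() { echo "+ $*"; }

TGT_DEV=cvl_0_0 INI_DEV=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TGT_DEV"
run ip -4 addr flush "$INI_DEV"
run ip netns add "$NS"
run ip link set "$TGT_DEV" netns "$NS"               # target side lives in the netns
run ip addr add 10.0.0.1/24 dev "$INI_DEV"           # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_DEV"  # target IP
run ip link set "$INI_DEV" up
run ip netns exec "$NS" ip link set "$TGT_DEV" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_DEV" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                               # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator
```

With the wrapper replaced by sudo, the two pings at the end reproduce the round-trip checks seen in the log before `nvmf_tgt` is launched inside the namespace.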
00:13:02.580 [2024-11-27 05:34:49.927712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.580 [2024-11-27 05:34:49.927719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.580 [2024-11-27 05:34:49.927726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.580 [2024-11-27 05:34:49.929271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.580 [2024-11-27 05:34:49.929379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.580 [2024-11-27 05:34:49.929491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.580 [2024-11-27 05:34:49.929491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.839 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.839 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:02.839 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:02.839 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:02.839 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.839 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.839 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:02.839 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.839 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.839 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.839 05:34:50 
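After launching `nvmf_tgt` in the namespace, the trace calls `waitforlisten 1695803`, which blocks until the app is up and listening on `/var/tmp/spdk.sock`. A rough sketch of that polling loop, under stated assumptions: the real helper in autotest_common.sh also verifies the pid is alive and probes the socket via rpc.py, while this version only waits for the socket path to appear, and the retry count parameter is added here for illustration.

```shell
# Simplified waitforlisten: poll until the SPDK app's RPC socket path
# exists. pid is accepted for signature parity with the real helper but
# is not checked in this sketch; max_retries is an illustrative knob.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    local i=0 max_retries=${3:-100}
    while [ ! -e "$sock" ]; do
        i=$((i + 1))
        [ "$i" -gt "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}
```

In the trace the corresponding call returns once the four reactor cores report started and the UNIX domain socket is accepting RPCs.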
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:02.839 "tick_rate": 2100000000, 00:13:02.839 "poll_groups": [ 00:13:02.839 { 00:13:02.839 "name": "nvmf_tgt_poll_group_000", 00:13:02.839 "admin_qpairs": 0, 00:13:02.839 "io_qpairs": 0, 00:13:02.839 "current_admin_qpairs": 0, 00:13:02.839 "current_io_qpairs": 0, 00:13:02.839 "pending_bdev_io": 0, 00:13:02.839 "completed_nvme_io": 0, 00:13:02.839 "transports": [] 00:13:02.839 }, 00:13:02.839 { 00:13:02.839 "name": "nvmf_tgt_poll_group_001", 00:13:02.839 "admin_qpairs": 0, 00:13:02.839 "io_qpairs": 0, 00:13:02.839 "current_admin_qpairs": 0, 00:13:02.839 "current_io_qpairs": 0, 00:13:02.839 "pending_bdev_io": 0, 00:13:02.839 "completed_nvme_io": 0, 00:13:02.839 "transports": [] 00:13:02.839 }, 00:13:02.839 { 00:13:02.839 "name": "nvmf_tgt_poll_group_002", 00:13:02.839 "admin_qpairs": 0, 00:13:02.839 "io_qpairs": 0, 00:13:02.840 "current_admin_qpairs": 0, 00:13:02.840 "current_io_qpairs": 0, 00:13:02.840 "pending_bdev_io": 0, 00:13:02.840 "completed_nvme_io": 0, 00:13:02.840 "transports": [] 00:13:02.840 }, 00:13:02.840 { 00:13:02.840 "name": "nvmf_tgt_poll_group_003", 00:13:02.840 "admin_qpairs": 0, 00:13:02.840 "io_qpairs": 0, 00:13:02.840 "current_admin_qpairs": 0, 00:13:02.840 "current_io_qpairs": 0, 00:13:02.840 "pending_bdev_io": 0, 00:13:02.840 "completed_nvme_io": 0, 00:13:02.840 "transports": [] 00:13:02.840 } 00:13:02.840 ] 00:13:02.840 }' 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:02.840 05:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.840 [2024-11-27 05:34:50.790876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:02.840 "tick_rate": 2100000000, 00:13:02.840 "poll_groups": [ 00:13:02.840 { 00:13:02.840 "name": "nvmf_tgt_poll_group_000", 00:13:02.840 "admin_qpairs": 0, 00:13:02.840 "io_qpairs": 0, 00:13:02.840 "current_admin_qpairs": 0, 00:13:02.840 "current_io_qpairs": 0, 00:13:02.840 "pending_bdev_io": 0, 00:13:02.840 "completed_nvme_io": 0, 00:13:02.840 "transports": [ 00:13:02.840 { 00:13:02.840 "trtype": "TCP" 00:13:02.840 } 00:13:02.840 ] 00:13:02.840 }, 00:13:02.840 { 00:13:02.840 "name": "nvmf_tgt_poll_group_001", 00:13:02.840 "admin_qpairs": 0, 00:13:02.840 "io_qpairs": 0, 00:13:02.840 "current_admin_qpairs": 0, 00:13:02.840 "current_io_qpairs": 0, 00:13:02.840 "pending_bdev_io": 0, 00:13:02.840 
"completed_nvme_io": 0, 00:13:02.840 "transports": [ 00:13:02.840 { 00:13:02.840 "trtype": "TCP" 00:13:02.840 } 00:13:02.840 ] 00:13:02.840 }, 00:13:02.840 { 00:13:02.840 "name": "nvmf_tgt_poll_group_002", 00:13:02.840 "admin_qpairs": 0, 00:13:02.840 "io_qpairs": 0, 00:13:02.840 "current_admin_qpairs": 0, 00:13:02.840 "current_io_qpairs": 0, 00:13:02.840 "pending_bdev_io": 0, 00:13:02.840 "completed_nvme_io": 0, 00:13:02.840 "transports": [ 00:13:02.840 { 00:13:02.840 "trtype": "TCP" 00:13:02.840 } 00:13:02.840 ] 00:13:02.840 }, 00:13:02.840 { 00:13:02.840 "name": "nvmf_tgt_poll_group_003", 00:13:02.840 "admin_qpairs": 0, 00:13:02.840 "io_qpairs": 0, 00:13:02.840 "current_admin_qpairs": 0, 00:13:02.840 "current_io_qpairs": 0, 00:13:02.840 "pending_bdev_io": 0, 00:13:02.840 "completed_nvme_io": 0, 00:13:02.840 "transports": [ 00:13:02.840 { 00:13:02.840 "trtype": "TCP" 00:13:02.840 } 00:13:02.840 ] 00:13:02.840 } 00:13:02.840 ] 00:13:02.840 }' 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:02.840 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.100 
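The `jcount` and `jsum` helpers exercised above (target/rpc.sh@14 and @19) validate the `nvmf_get_stats` output: `jcount '.poll_groups[].name'` must equal 4 (one poll group per core in the 0xF mask) and `jsum` over the qpair counters must be 0 before any connections exist. The real helpers pipe through jq; the approximation below uses only grep/awk so it carries no jq dependency, and it matches bare keys rather than full jq paths.

```shell
# grep/awk approximation of rpc.sh's jcount/jsum: count occurrences of a
# JSON key, and sum that key's integer values, in nvmf_get_stats output.
jcount() { grep -o "\"$1\"" | wc -l; }
jsum()   { grep -o "\"$1\": *[0-9]*" | awk -F': *' '{s+=$2} END {print s+0}'; }

# Trimmed-down stand-in for the stats JSON captured in the trace.
stats='{"tick_rate":2100000000,"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","admin_qpairs":0,"io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_001","admin_qpairs":0,"io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_002","admin_qpairs":0,"io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","admin_qpairs":0,"io_qpairs":0}]}'

printf '%s\n' "$stats" | jcount name          # prints 4 (poll groups)
printf '%s\n' "$stats" | jsum admin_qpairs    # prints 0 (no qpairs yet)
```

These are the same `(( 4 == 4 ))` and `(( 0 == 0 ))` checks visible in the trace, just against a condensed copy of the stats document.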
05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.100 Malloc1 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.100 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:03.101 05:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.101 [2024-11-27 05:34:50.966817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:03.101 05:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:03.101 [2024-11-27 05:34:50.995367] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:03.101 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:03.101 could not add new controller: failed to write to nvme-fabrics device 00:13:03.101 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:03.101 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:03.101 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:03.101 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:03.101 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:03.101 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.101 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.101 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.101 05:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.475 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.475 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:04.475 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.475 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:04.475 05:34:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
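The `waitforserial SPDKISFASTANDAWESOME` call that just returned 0 above polls `lsblk -l -o NAME,SERIAL` until a block device carrying the subsystem's serial appears after `nvme connect`. A sketch of that loop, with two stated deviations from autotest_common.sh: the listing command is injectable via a hypothetical `LSBLK_CMD` variable so the loop can be exercised without real NVMe devices, and the 2-second probe interval is shortened.

```shell
# Sketch of waitforserial: poll the block-device list until at least
# `want` devices with the given serial show up, or give up after 15
# retries. LSBLK_CMD is an illustrative override for testing.
waitforserial() {
    local serial=$1 want=${2:-1} i=0 found
    while :; do
        found=$(${LSBLK_CMD:-lsblk -l -o NAME,SERIAL} | grep -c "$serial")
        [ "$found" -ge "$want" ] && return 0
        i=$((i + 1))
        [ "$i" -gt 15 ] && return 1
        sleep 0.2   # the real helper sleeps 2s between probes
    done
}
```

The companion `waitforserial_disconnect` seen next in the trace is the inverse: it polls until `grep -q -w` no longer finds the serial after `nvme disconnect`.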
00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.375 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:06.376 05:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.376 [2024-11-27 05:34:54.311951] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:13:06.376 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:06.376 could not add new controller: failed to write to nvme-fabrics device 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:06.376 
05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.376 05:34:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.753 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.753 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:07.753 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.753 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:07.753 05:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.657 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:09.916 05:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.916 [2024-11-27 05:34:57.684641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.916 05:34:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.854 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.854 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:10.854 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.854 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:10.854 05:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:13.388 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:13.388 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:13.388 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.388 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:13.388 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.388 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:13.388 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.389 
05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.389 [2024-11-27 05:35:00.990690] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.389 05:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.389 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.389 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.389 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.389 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.389 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.389 05:35:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.329 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.329 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:14.329 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.329 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:14.329 05:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:16.235 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.496 05:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.496 [2024-11-27 05:35:04.294292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.496 05:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.875 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.875 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:17.875 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.875 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:17.875 05:35:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:19.783 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.783 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:19.783 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.783 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:19.783 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.783 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:19.783 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.783 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.783 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:19.783 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.784 [2024-11-27 05:35:07.703145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.784 05:35:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.164 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.164 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:21.164 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:21.164 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:21.164 05:35:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.070 [2024-11-27 05:35:10.952947] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.070 05:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.478 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.478 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:24.478 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.478 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:24.478 05:35:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:26.385 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 [2024-11-27 05:35:14.308902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 [2024-11-27 05:35:14.356981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 
05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.386 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:13:26.646 [2024-11-27 05:35:14.405101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.646 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 [2024-11-27 05:35:14.453267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 [2024-11-27 05:35:14.501435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:26.647 "tick_rate": 2100000000, 00:13:26.647 "poll_groups": [ 00:13:26.647 { 00:13:26.647 "name": "nvmf_tgt_poll_group_000", 00:13:26.647 "admin_qpairs": 2, 00:13:26.647 "io_qpairs": 168, 00:13:26.647 "current_admin_qpairs": 0, 00:13:26.647 "current_io_qpairs": 0, 00:13:26.647 "pending_bdev_io": 0, 00:13:26.647 "completed_nvme_io": 269, 00:13:26.647 "transports": [ 00:13:26.647 { 00:13:26.647 "trtype": "TCP" 00:13:26.647 } 00:13:26.647 ] 00:13:26.647 }, 00:13:26.647 { 00:13:26.647 "name": "nvmf_tgt_poll_group_001", 00:13:26.647 "admin_qpairs": 2, 00:13:26.647 "io_qpairs": 168, 00:13:26.647 "current_admin_qpairs": 0, 00:13:26.647 "current_io_qpairs": 0, 00:13:26.647 "pending_bdev_io": 0, 00:13:26.647 "completed_nvme_io": 316, 00:13:26.647 "transports": [ 00:13:26.647 { 00:13:26.647 "trtype": "TCP" 00:13:26.647 } 00:13:26.647 ] 00:13:26.647 }, 00:13:26.647 { 00:13:26.647 "name": "nvmf_tgt_poll_group_002", 00:13:26.647 "admin_qpairs": 1, 00:13:26.647 "io_qpairs": 168, 00:13:26.647 "current_admin_qpairs": 0, 00:13:26.647 "current_io_qpairs": 0, 00:13:26.647 "pending_bdev_io": 0, 
00:13:26.647 "completed_nvme_io": 218, 00:13:26.647 "transports": [ 00:13:26.647 { 00:13:26.647 "trtype": "TCP" 00:13:26.647 } 00:13:26.647 ] 00:13:26.647 }, 00:13:26.647 { 00:13:26.647 "name": "nvmf_tgt_poll_group_003", 00:13:26.647 "admin_qpairs": 2, 00:13:26.647 "io_qpairs": 168, 00:13:26.647 "current_admin_qpairs": 0, 00:13:26.647 "current_io_qpairs": 0, 00:13:26.647 "pending_bdev_io": 0, 00:13:26.647 "completed_nvme_io": 219, 00:13:26.647 "transports": [ 00:13:26.647 { 00:13:26.647 "trtype": "TCP" 00:13:26.647 } 00:13:26.647 ] 00:13:26.647 } 00:13:26.647 ] 00:13:26.647 }' 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:26.647 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.907 rmmod nvme_tcp 00:13:26.907 rmmod nvme_fabrics 00:13:26.907 rmmod nvme_keyring 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1695803 ']' 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1695803 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1695803 ']' 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1695803 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1695803 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1695803' 00:13:26.907 killing process with pid 1695803 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1695803 00:13:26.907 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1695803 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.167 05:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.074 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.074 00:13:29.074 real 0m33.499s 00:13:29.074 user 1m41.581s 00:13:29.074 sys 0m6.605s 00:13:29.074 05:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.074 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.074 ************************************ 00:13:29.074 END TEST nvmf_rpc 00:13:29.074 ************************************ 00:13:29.074 05:35:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:29.074 05:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.074 05:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.334 ************************************ 00:13:29.334 START TEST nvmf_invalid 00:13:29.334 ************************************ 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:29.334 * Looking for test storage... 
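The `jsum` helper exercised in the stats check above (`target/rpc.sh@19`/`@20`) sums one numeric field across all poll groups: `jq` extracts the field from the `nvmf_get_stats` JSON, and `awk` totals the column. A sketch under the assumption that `jq` is installed, with a trimmed-down stand-in for the stats payload:

```shell
#!/usr/bin/env bash
# Stand-in for the nvmf_get_stats output seen in the trace: four poll
# groups, 168 io_qpairs each (the real JSON carries more fields).
stats='{"poll_groups":[{"io_qpairs":168},{"io_qpairs":168},{"io_qpairs":168},{"io_qpairs":168}]}'

# jsum as in target/rpc.sh: jq pulls one value per poll group,
# awk accumulates them into a single total.
jsum() {
    local filter=$1
    echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
}

total=$(jsum '.poll_groups[].io_qpairs')
echo "$total"   # → 672, matching the (( 672 > 0 )) check in the trace
```

The same pattern yields the `(( 7 > 0 ))` admin-qpair check with the filter `.poll_groups[].admin_qpairs`.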
00:13:29.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:29.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.334 --rc genhtml_branch_coverage=1 00:13:29.334 --rc 
genhtml_function_coverage=1 00:13:29.334 --rc genhtml_legend=1 00:13:29.334 --rc geninfo_all_blocks=1 00:13:29.334 --rc geninfo_unexecuted_blocks=1 00:13:29.334 00:13:29.334 ' 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:29.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.334 --rc genhtml_branch_coverage=1 00:13:29.334 --rc genhtml_function_coverage=1 00:13:29.334 --rc genhtml_legend=1 00:13:29.334 --rc geninfo_all_blocks=1 00:13:29.334 --rc geninfo_unexecuted_blocks=1 00:13:29.334 00:13:29.334 ' 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:29.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.334 --rc genhtml_branch_coverage=1 00:13:29.334 --rc genhtml_function_coverage=1 00:13:29.334 --rc genhtml_legend=1 00:13:29.334 --rc geninfo_all_blocks=1 00:13:29.334 --rc geninfo_unexecuted_blocks=1 00:13:29.334 00:13:29.334 ' 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:29.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.334 --rc genhtml_branch_coverage=1 00:13:29.334 --rc genhtml_function_coverage=1 00:13:29.334 --rc genhtml_legend=1 00:13:29.334 --rc geninfo_all_blocks=1 00:13:29.334 --rc geninfo_unexecuted_blocks=1 00:13:29.334 00:13:29.334 ' 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.334 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.335 05:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:29.335 05:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.335 05:35:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:35.916 05:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.916 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.917 05:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:35.917 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:35.917 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:35.917 Found net devices under 0000:86:00.0: cvl_0_0 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:35.917 Found net devices under 0000:86:00.1: cvl_0_1 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.917 05:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.917 05:35:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.917 05:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:35.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:13:35.917 00:13:35.917 --- 10.0.0.2 ping statistics --- 00:13:35.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.917 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:13:35.917 00:13:35.917 --- 10.0.0.1 ping statistics --- 00:13:35.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.917 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:35.917 05:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1703544 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.917 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1703544 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1703544 ']' 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
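[Editor's note] The `waitforlisten` step above blocks until the freshly launched `nvmf_tgt` is accepting JSON-RPC connections on `/var/tmp/spdk.sock`. A minimal sketch of that poll loop, assuming a simplified existence check on the socket path rather than an actual connect (function name `wait_for_rpc_sock` is illustrative, not the SPDK helper):

```shell
# Poll for an RPC socket path, retrying up to max_retries times.
# Simplified: tests path existence only; the real helper also verifies
# the process is alive and the socket accepts connections.
wait_for_rpc_sock() {
    local rpc_addr=$1 max_retries=${2:-100} i=0
    while (( i < max_retries )); do
        [[ -e "$rpc_addr" ]] && return 0
        sleep 0.1
        (( ++i ))
    done
    return 1
}
```

The real script additionally takes the target PID so it can bail out early if the process dies before the socket appears.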
00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:35.918 [2024-11-27 05:35:23.306442] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:13:35.918 [2024-11-27 05:35:23.306493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.918 [2024-11-27 05:35:23.387324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.918 [2024-11-27 05:35:23.427719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.918 [2024-11-27 05:35:23.427759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.918 [2024-11-27 05:35:23.427768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.918 [2024-11-27 05:35:23.427776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.918 [2024-11-27 05:35:23.427784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
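[Editor's note] The four "Reactor started on core N" notices follow from the `-m 0xF` argument: SPDK interprets it as a hex bitmask and starts one reactor per set bit (cores 0 through 3 here). A small sketch of expanding such a mask into a core list (helper name is illustrative, not part of the SPDK scripts):

```shell
# Expand an SPDK-style hex core mask (e.g. 0xF) into space-separated
# core indices by walking the set bits from LSB to MSB.
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask )); do
        (( mask & 1 )) && cores+=("$core")
        (( mask >>= 1, ++core ))
    done
    echo "${cores[@]}"
}
```

For example, `0xF` yields cores 0 1 2 3, matching the four reactor notices in the log; a sparse mask like `0x5` would start reactors on cores 0 and 2 only.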
00:13:35.918 [2024-11-27 05:35:23.429256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.918 [2024-11-27 05:35:23.429367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.918 [2024-11-27 05:35:23.429473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.918 [2024-11-27 05:35:23.429474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8605 00:13:35.918 [2024-11-27 05:35:23.739361] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:35.918 { 00:13:35.918 "nqn": "nqn.2016-06.io.spdk:cnode8605", 00:13:35.918 "tgt_name": "foobar", 00:13:35.918 "method": "nvmf_create_subsystem", 00:13:35.918 "req_id": 1 00:13:35.918 } 00:13:35.918 Got JSON-RPC error 
response 00:13:35.918 response: 00:13:35.918 { 00:13:35.918 "code": -32603, 00:13:35.918 "message": "Unable to find target foobar" 00:13:35.918 }' 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:35.918 { 00:13:35.918 "nqn": "nqn.2016-06.io.spdk:cnode8605", 00:13:35.918 "tgt_name": "foobar", 00:13:35.918 "method": "nvmf_create_subsystem", 00:13:35.918 "req_id": 1 00:13:35.918 } 00:13:35.918 Got JSON-RPC error response 00:13:35.918 response: 00:13:35.918 { 00:13:35.918 "code": -32603, 00:13:35.918 "message": "Unable to find target foobar" 00:13:35.918 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:35.918 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4140 00:13:36.178 [2024-11-27 05:35:23.944095] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4140: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:36.178 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:36.178 { 00:13:36.178 "nqn": "nqn.2016-06.io.spdk:cnode4140", 00:13:36.178 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:36.178 "method": "nvmf_create_subsystem", 00:13:36.178 "req_id": 1 00:13:36.178 } 00:13:36.178 Got JSON-RPC error response 00:13:36.178 response: 00:13:36.178 { 00:13:36.178 "code": -32602, 00:13:36.178 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:36.178 }' 00:13:36.178 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:36.178 { 00:13:36.178 "nqn": "nqn.2016-06.io.spdk:cnode4140", 00:13:36.178 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:36.178 "method": "nvmf_create_subsystem", 00:13:36.178 
"req_id": 1 00:13:36.178 } 00:13:36.178 Got JSON-RPC error response 00:13:36.178 response: 00:13:36.178 { 00:13:36.178 "code": -32602, 00:13:36.178 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:36.178 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:36.178 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:36.178 05:35:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10362 00:13:36.178 [2024-11-27 05:35:24.136756] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10362: invalid model number 'SPDK_Controller' 00:13:36.178 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:36.178 { 00:13:36.178 "nqn": "nqn.2016-06.io.spdk:cnode10362", 00:13:36.178 "model_number": "SPDK_Controller\u001f", 00:13:36.178 "method": "nvmf_create_subsystem", 00:13:36.179 "req_id": 1 00:13:36.179 } 00:13:36.179 Got JSON-RPC error response 00:13:36.179 response: 00:13:36.179 { 00:13:36.179 "code": -32602, 00:13:36.179 "message": "Invalid MN SPDK_Controller\u001f" 00:13:36.179 }' 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:36.179 { 00:13:36.179 "nqn": "nqn.2016-06.io.spdk:cnode10362", 00:13:36.179 "model_number": "SPDK_Controller\u001f", 00:13:36.179 "method": "nvmf_create_subsystem", 00:13:36.179 "req_id": 1 00:13:36.179 } 00:13:36.179 Got JSON-RPC error response 00:13:36.179 response: 00:13:36.179 { 00:13:36.179 "code": -32602, 00:13:36.179 "message": "Invalid MN SPDK_Controller\u001f" 00:13:36.179 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
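[Editor's note] The long `gen_random_s` trace that follows builds an invalid serial number one character at a time: each iteration picks a random code point from the printable-ASCII `chars` array, converts it with `printf %x` / `echo -e`, and appends it to `string`. A condensed sketch of the same pattern, assuming bash's `$RANDOM` and restricting the range to 32..126 (the script's array also includes 127/DEL):

```shell
# Build a string of $1 random printable-ASCII characters, mirroring the
# gen_random_s loop in target/invalid.sh (condensed, illustrative).
gen_random_string() {
    local length=$1 ll string= code ch
    for (( ll = 0; ll < length; ll++ )); do
        # pick a code point in the printable range 32..126
        code=$(( RANDOM % 95 + 32 ))
        # convert the code point to its character via an octal escape
        printf -v ch "\\$(printf '%03o' "$code")"
        string+=$ch
    done
    printf '%s\n' "$string"
}
```

Strings produced this way (like the 21-character `{Cs;49I)]gGyO!`>-!J2r` below) are then fed to `nvmf_create_subsystem -s` to confirm the target rejects them with "Invalid SN".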
00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.179 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.440 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:36.440 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:36.440 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:36.440 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:36.441 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.441 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.441 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ { == \- ]] 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '{Cs;49I)]gGyO!`>-!J2r' 00:13:36.441 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '{Cs;49I)]gGyO!`>-!J2r' nqn.2016-06.io.spdk:cnode21180 00:13:36.702 [2024-11-27 05:35:24.481913] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21180: invalid serial number '{Cs;49I)]gGyO!`>-!J2r' 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:36.702 { 00:13:36.702 "nqn": "nqn.2016-06.io.spdk:cnode21180", 00:13:36.702 "serial_number": "{Cs;49I)]gGyO!`>-!J2r", 00:13:36.702 "method": "nvmf_create_subsystem", 00:13:36.702 "req_id": 1 00:13:36.702 } 00:13:36.702 Got JSON-RPC error response 00:13:36.702 response: 00:13:36.702 { 00:13:36.702 "code": -32602, 00:13:36.702 "message": "Invalid SN {Cs;49I)]gGyO!`>-!J2r" 00:13:36.702 }' 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:36.702 { 00:13:36.702 "nqn": "nqn.2016-06.io.spdk:cnode21180", 00:13:36.702 "serial_number": "{Cs;49I)]gGyO!`>-!J2r", 00:13:36.702 "method": "nvmf_create_subsystem", 00:13:36.702 "req_id": 1 00:13:36.702 } 00:13:36.702 Got JSON-RPC error response 00:13:36.702 response: 00:13:36.702 { 00:13:36.702 "code": -32602, 00:13:36.702 "message": "Invalid SN {Cs;49I)]gGyO!`>-!J2r" 00:13:36.702 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:36.702 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:36.702 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:36.703 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:36.703 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:36.703 
05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:36.703 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.703 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.704 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:36.704 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:36.963 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:36.963 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:36.963 05:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:13:36.963 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'wnU~>>c:I a.,L5SOc9dzvw:'\''X_Z~Lk6;JXx)0' 00:13:36.964 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'wnU~>>c:I a.,L5SOc9dzvw:'\''X_Z~Lk6;JXx)0' nqn.2016-06.io.spdk:cnode14876 00:13:36.964 [2024-11-27 05:35:24.947428] nvmf_rpc.c: 
422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14876: invalid model number 'wnU~>>c:I a.,L5SOc9dzvw:'X_Z~Lk6;JXx)0' 00:13:37.223 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:37.223 { 00:13:37.223 "nqn": "nqn.2016-06.io.spdk:cnode14876", 00:13:37.223 "model_number": "wnU~>>c:I a.,L5SOc9dz\u007fvw:\u007f'\''X\u007f_Z~Lk6;JXx)0", 00:13:37.223 "method": "nvmf_create_subsystem", 00:13:37.223 "req_id": 1 00:13:37.223 } 00:13:37.223 Got JSON-RPC error response 00:13:37.223 response: 00:13:37.223 { 00:13:37.223 "code": -32602, 00:13:37.223 "message": "Invalid MN wnU~>>c:I a.,L5SOc9dz\u007fvw:\u007f'\''X\u007f_Z~Lk6;JXx)0" 00:13:37.223 }' 00:13:37.223 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:37.223 { 00:13:37.223 "nqn": "nqn.2016-06.io.spdk:cnode14876", 00:13:37.223 "model_number": "wnU~>>c:I a.,L5SOc9dz\u007fvw:\u007f'X\u007f_Z~Lk6;JXx)0", 00:13:37.223 "method": "nvmf_create_subsystem", 00:13:37.223 "req_id": 1 00:13:37.223 } 00:13:37.223 Got JSON-RPC error response 00:13:37.223 response: 00:13:37.223 { 00:13:37.223 "code": -32602, 00:13:37.223 "message": "Invalid MN wnU~>>c:I a.,L5SOc9dz\u007fvw:\u007f'X\u007f_Z~Lk6;JXx)0" 00:13:37.223 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:37.223 05:35:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:37.223 [2024-11-27 05:35:25.148159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.223 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:37.482 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:37.482 05:35:25 
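The long per-character loop traced above is SPDK's `gen_random_s` helper building a random serial/model string one printable byte at a time. A condensed sketch of that loop follows; the `chars` array (decimal codes 32 through 127) and the `printf %x` / `echo -e` idiom mirror what the trace shows, but this compact single-function form is an assumption, not the literal body of `target/invalid.sh`:

```shell
# Sketch of gen_random_s as traced above (condensed; an assumption,
# not the verbatim invalid.sh source).
gen_random_s() {
    local length=$1 ll string=''
    # Printable ASCII plus DEL: decimal codes 32..127, as in the logged chars array.
    local chars=($(seq 32 127))
    for ((ll = 0; ll < length; ll++)); do
        # Pick a random code, render it with printf %x + echo -e, append it.
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    printf '%s\n' "$string"
}
```

The test then feeds the resulting string to `rpc.py nvmf_create_subsystem` (as `-s` for a serial number or `-d` for a model number) and asserts that the JSON-RPC response contains `Invalid SN` or `Invalid MN`.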
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:37.482 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:37.482 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:37.482 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:37.742 [2024-11-27 05:35:25.542718] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:37.742 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:37.742 { 00:13:37.742 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:37.742 "listen_address": { 00:13:37.742 "trtype": "tcp", 00:13:37.742 "traddr": "", 00:13:37.742 "trsvcid": "4421" 00:13:37.742 }, 00:13:37.742 "method": "nvmf_subsystem_remove_listener", 00:13:37.742 "req_id": 1 00:13:37.742 } 00:13:37.742 Got JSON-RPC error response 00:13:37.742 response: 00:13:37.742 { 00:13:37.742 "code": -32602, 00:13:37.742 "message": "Invalid parameters" 00:13:37.742 }' 00:13:37.742 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:37.742 { 00:13:37.742 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:37.742 "listen_address": { 00:13:37.742 "trtype": "tcp", 00:13:37.742 "traddr": "", 00:13:37.742 "trsvcid": "4421" 00:13:37.742 }, 00:13:37.742 "method": "nvmf_subsystem_remove_listener", 00:13:37.742 "req_id": 1 00:13:37.742 } 00:13:37.742 Got JSON-RPC error response 00:13:37.742 response: 00:13:37.742 { 00:13:37.742 "code": -32602, 00:13:37.742 "message": "Invalid parameters" 00:13:37.742 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:37.742 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7831 -i 0 00:13:37.742 [2024-11-27 05:35:25.739307] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7831: invalid cntlid range [0-65519] 00:13:38.001 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:38.001 { 00:13:38.001 "nqn": "nqn.2016-06.io.spdk:cnode7831", 00:13:38.001 "min_cntlid": 0, 00:13:38.001 "method": "nvmf_create_subsystem", 00:13:38.001 "req_id": 1 00:13:38.001 } 00:13:38.001 Got JSON-RPC error response 00:13:38.001 response: 00:13:38.001 { 00:13:38.001 "code": -32602, 00:13:38.001 "message": "Invalid cntlid range [0-65519]" 00:13:38.001 }' 00:13:38.001 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:38.001 { 00:13:38.001 "nqn": "nqn.2016-06.io.spdk:cnode7831", 00:13:38.001 "min_cntlid": 0, 00:13:38.001 "method": "nvmf_create_subsystem", 00:13:38.001 "req_id": 1 00:13:38.001 } 00:13:38.001 Got JSON-RPC error response 00:13:38.001 response: 00:13:38.001 { 00:13:38.001 "code": -32602, 00:13:38.001 "message": "Invalid cntlid range [0-65519]" 00:13:38.001 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:38.001 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16317 -i 65520 00:13:38.001 [2024-11-27 05:35:25.948020] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16317: invalid cntlid range [65520-65519] 00:13:38.001 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:38.001 { 00:13:38.001 "nqn": "nqn.2016-06.io.spdk:cnode16317", 00:13:38.001 "min_cntlid": 65520, 00:13:38.001 "method": "nvmf_create_subsystem", 00:13:38.001 "req_id": 1 00:13:38.001 } 00:13:38.001 Got JSON-RPC error response 00:13:38.001 response: 00:13:38.001 { 00:13:38.001 "code": 
-32602, 00:13:38.001 "message": "Invalid cntlid range [65520-65519]" 00:13:38.001 }' 00:13:38.001 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:38.001 { 00:13:38.001 "nqn": "nqn.2016-06.io.spdk:cnode16317", 00:13:38.001 "min_cntlid": 65520, 00:13:38.001 "method": "nvmf_create_subsystem", 00:13:38.001 "req_id": 1 00:13:38.001 } 00:13:38.001 Got JSON-RPC error response 00:13:38.001 response: 00:13:38.001 { 00:13:38.001 "code": -32602, 00:13:38.001 "message": "Invalid cntlid range [65520-65519]" 00:13:38.001 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:38.001 05:35:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14446 -I 0 00:13:38.260 [2024-11-27 05:35:26.168753] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14446: invalid cntlid range [1-0] 00:13:38.260 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:38.260 { 00:13:38.260 "nqn": "nqn.2016-06.io.spdk:cnode14446", 00:13:38.260 "max_cntlid": 0, 00:13:38.260 "method": "nvmf_create_subsystem", 00:13:38.260 "req_id": 1 00:13:38.260 } 00:13:38.260 Got JSON-RPC error response 00:13:38.260 response: 00:13:38.260 { 00:13:38.260 "code": -32602, 00:13:38.260 "message": "Invalid cntlid range [1-0]" 00:13:38.260 }' 00:13:38.260 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:38.260 { 00:13:38.260 "nqn": "nqn.2016-06.io.spdk:cnode14446", 00:13:38.260 "max_cntlid": 0, 00:13:38.260 "method": "nvmf_create_subsystem", 00:13:38.260 "req_id": 1 00:13:38.260 } 00:13:38.260 Got JSON-RPC error response 00:13:38.260 response: 00:13:38.260 { 00:13:38.260 "code": -32602, 00:13:38.260 "message": "Invalid cntlid range [1-0]" 00:13:38.260 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:38.260 05:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4807 -I 65520 00:13:38.520 [2024-11-27 05:35:26.369437] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4807: invalid cntlid range [1-65520] 00:13:38.520 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:38.520 { 00:13:38.520 "nqn": "nqn.2016-06.io.spdk:cnode4807", 00:13:38.520 "max_cntlid": 65520, 00:13:38.520 "method": "nvmf_create_subsystem", 00:13:38.520 "req_id": 1 00:13:38.520 } 00:13:38.520 Got JSON-RPC error response 00:13:38.520 response: 00:13:38.520 { 00:13:38.520 "code": -32602, 00:13:38.520 "message": "Invalid cntlid range [1-65520]" 00:13:38.520 }' 00:13:38.520 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:38.520 { 00:13:38.520 "nqn": "nqn.2016-06.io.spdk:cnode4807", 00:13:38.520 "max_cntlid": 65520, 00:13:38.520 "method": "nvmf_create_subsystem", 00:13:38.520 "req_id": 1 00:13:38.520 } 00:13:38.520 Got JSON-RPC error response 00:13:38.520 response: 00:13:38.520 { 00:13:38.520 "code": -32602, 00:13:38.520 "message": "Invalid cntlid range [1-65520]" 00:13:38.520 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:38.520 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6397 -i 6 -I 5 00:13:38.779 [2024-11-27 05:35:26.574163] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6397: invalid cntlid range [6-5] 00:13:38.779 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:38.779 { 00:13:38.779 "nqn": "nqn.2016-06.io.spdk:cnode6397", 00:13:38.779 "min_cntlid": 6, 00:13:38.779 "max_cntlid": 5, 00:13:38.779 "method": 
"nvmf_create_subsystem", 00:13:38.779 "req_id": 1 00:13:38.779 } 00:13:38.779 Got JSON-RPC error response 00:13:38.779 response: 00:13:38.779 { 00:13:38.779 "code": -32602, 00:13:38.779 "message": "Invalid cntlid range [6-5]" 00:13:38.779 }' 00:13:38.779 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:38.779 { 00:13:38.779 "nqn": "nqn.2016-06.io.spdk:cnode6397", 00:13:38.779 "min_cntlid": 6, 00:13:38.779 "max_cntlid": 5, 00:13:38.779 "method": "nvmf_create_subsystem", 00:13:38.779 "req_id": 1 00:13:38.779 } 00:13:38.779 Got JSON-RPC error response 00:13:38.779 response: 00:13:38.779 { 00:13:38.779 "code": -32602, 00:13:38.779 "message": "Invalid cntlid range [6-5]" 00:13:38.779 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:38.779 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:38.779 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:38.779 { 00:13:38.779 "name": "foobar", 00:13:38.779 "method": "nvmf_delete_target", 00:13:38.779 "req_id": 1 00:13:38.779 } 00:13:38.779 Got JSON-RPC error response 00:13:38.779 response: 00:13:38.779 { 00:13:38.779 "code": -32602, 00:13:38.779 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:38.779 }' 00:13:38.779 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:38.779 { 00:13:38.779 "name": "foobar", 00:13:38.779 "method": "nvmf_delete_target", 00:13:38.780 "req_id": 1 00:13:38.780 } 00:13:38.780 Got JSON-RPC error response 00:13:38.780 response: 00:13:38.780 { 00:13:38.780 "code": -32602, 00:13:38.780 "message": "The specified target doesn't exist, cannot delete it." 
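Taken together, the cntlid failures logged above (min 0, min 65520, max 0, max 65520, and min 6 with max 5, each rejected with "Invalid cntlid range") imply the bounds the target enforces: both controller IDs must lie in [1, 65519] (0xFFEF) and min must not exceed max. A hedged sketch of that predicate, with an illustrative helper name that does not appear in `invalid.sh`:

```shell
# Reconstruction of the cntlid bounds implied by the errors above.
# The function name valid_cntlid_range is illustrative, not from invalid.sh.
valid_cntlid_range() {
    local min=$1 max=$2
    # Both IDs in [1, 65519] and a non-inverted range.
    (( min >= 1 && min <= 65519 && max >= 1 && max <= 65519 && min <= max ))
}
```

Every rejected request in the log fails this predicate, while an in-range pair such as 1 and 65519 passes.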
00:13:38.780 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:38.780 rmmod nvme_tcp 00:13:38.780 rmmod nvme_fabrics 00:13:38.780 rmmod nvme_keyring 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1703544 ']' 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1703544 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1703544 ']' 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1703544 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.780 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1703544 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1703544' 00:13:39.039 killing process with pid 1703544 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1703544 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1703544 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.039 05:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.039 05:35:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:41.577 00:13:41.577 real 0m11.936s 00:13:41.577 user 0m18.362s 00:13:41.577 sys 0m5.389s 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:41.577 ************************************ 00:13:41.577 END TEST nvmf_invalid 00:13:41.577 ************************************ 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:41.577 ************************************ 00:13:41.577 START TEST nvmf_connect_stress 00:13:41.577 ************************************ 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:41.577 * Looking for test storage... 
00:13:41.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:41.577 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:41.578 05:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.578 05:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:41.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.578 --rc genhtml_branch_coverage=1 00:13:41.578 --rc genhtml_function_coverage=1 00:13:41.578 --rc genhtml_legend=1 00:13:41.578 --rc geninfo_all_blocks=1 00:13:41.578 --rc geninfo_unexecuted_blocks=1 00:13:41.578 00:13:41.578 ' 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:41.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.578 --rc genhtml_branch_coverage=1 00:13:41.578 --rc genhtml_function_coverage=1 00:13:41.578 --rc genhtml_legend=1 00:13:41.578 --rc geninfo_all_blocks=1 00:13:41.578 --rc geninfo_unexecuted_blocks=1 00:13:41.578 00:13:41.578 ' 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:41.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.578 --rc genhtml_branch_coverage=1 00:13:41.578 --rc genhtml_function_coverage=1 00:13:41.578 --rc genhtml_legend=1 00:13:41.578 --rc geninfo_all_blocks=1 00:13:41.578 --rc geninfo_unexecuted_blocks=1 00:13:41.578 00:13:41.578 ' 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:41.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.578 --rc genhtml_branch_coverage=1 00:13:41.578 --rc genhtml_function_coverage=1 00:13:41.578 --rc genhtml_legend=1 00:13:41.578 --rc geninfo_all_blocks=1 00:13:41.578 --rc geninfo_unexecuted_blocks=1 00:13:41.578 00:13:41.578 ' 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:41.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.578 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:41.579 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:41.579 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:41.579 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.579 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.579 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.579 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:41.579 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:41.579 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:41.579 05:35:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.216 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.217 05:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:48.217 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.217 05:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:48.217 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.217 05:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:48.217 Found net devices under 0000:86:00.0: cvl_0_0 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:48.217 Found net devices under 0000:86:00.1: cvl_0_1 
00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:48.217 05:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:48.217 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:48.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:48.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:13:48.218 00:13:48.218 --- 10.0.0.2 ping statistics --- 00:13:48.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.218 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:13:48.218 00:13:48.218 --- 10.0.0.1 ping statistics --- 00:13:48.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.218 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:48.218 05:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1707797 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1707797 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1707797 ']' 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.218 [2024-11-27 05:35:35.349073] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:13:48.218 [2024-11-27 05:35:35.349128] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.218 [2024-11-27 05:35:35.426797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:48.218 [2024-11-27 05:35:35.469502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.218 [2024-11-27 05:35:35.469538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.218 [2024-11-27 05:35:35.469545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.218 [2024-11-27 05:35:35.469551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.218 [2024-11-27 05:35:35.469556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:48.218 [2024-11-27 05:35:35.470983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.218 [2024-11-27 05:35:35.471093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.218 [2024-11-27 05:35:35.471093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.218 [2024-11-27 05:35:35.612439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.218 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.219 [2024-11-27 05:35:35.636689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.219 NULL1 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1707825 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.219 05:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.219 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.219 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:48.219 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.219 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.219 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.507 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.507 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:48.507 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.507 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.507 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.781 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.781 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:48.781 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.781 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.781 05:35:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.102 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.102 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:49.102 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.102 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.102 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.379 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.379 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:49.379 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.379 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.379 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.946 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.946 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:49.946 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.946 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.946 05:35:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.205 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.205 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:50.205 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.205 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.205 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.465 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.465 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:50.465 05:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.465 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.465 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.724 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.724 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:50.724 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.724 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.724 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.290 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.290 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:51.290 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.290 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.290 05:35:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.548 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.548 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:51.548 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.548 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.548 
05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.807 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.807 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:51.807 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.807 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.807 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.065 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.065 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:52.065 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.065 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.065 05:35:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.324 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.324 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:52.324 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.324 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.324 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.892 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.892 
05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:52.892 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.892 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.892 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.152 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.152 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:53.152 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.152 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.152 05:35:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.412 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.412 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:53.412 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.412 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.412 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.671 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.671 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:53.671 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:53.671 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.671 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.930 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.930 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:53.930 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.930 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.930 05:35:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.498 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.498 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:54.498 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.498 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.498 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.758 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.758 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:54.758 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.758 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.758 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:55.017 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.017 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:55.017 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.017 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.017 05:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.275 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.275 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:55.275 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.275 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.275 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.844 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.844 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:55.844 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.844 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.844 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.103 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.103 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1707825 00:13:56.103 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.103 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.103 05:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.362 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.362 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:56.362 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.362 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.362 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.621 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.621 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:56.621 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.621 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.621 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.880 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.880 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:56.880 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.880 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:56.880 05:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.448 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.448 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:57.448 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.448 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.448 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.706 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.706 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:57.706 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.707 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.707 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.965 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1707825 00:13:57.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1707825) - No such process 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1707825 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:57.965 rmmod nvme_tcp 00:13:57.965 rmmod nvme_fabrics 00:13:57.965 rmmod nvme_keyring 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1707797 ']' 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1707797 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1707797 ']' 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1707797 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1707797 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1707797' 00:13:57.965 killing process with pid 1707797 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1707797 00:13:57.965 05:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1707797 00:13:58.223 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:58.223 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:58.223 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:58.223 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:58.223 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:58.223 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:58.223 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:58.223 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:58.223 05:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:58.224 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.224 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.224 05:35:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.769 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:00.769 00:14:00.769 real 0m19.070s 00:14:00.769 user 0m39.315s 00:14:00.769 sys 0m8.681s 00:14:00.769 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.769 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.769 ************************************ 00:14:00.769 END TEST nvmf_connect_stress 00:14:00.769 ************************************ 00:14:00.769 05:35:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:00.770 ************************************ 00:14:00.770 START TEST nvmf_fused_ordering 00:14:00.770 ************************************ 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:00.770 * Looking for test storage... 
00:14:00.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:00.770 05:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.770 05:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.770 --rc genhtml_branch_coverage=1 00:14:00.770 --rc genhtml_function_coverage=1 00:14:00.770 --rc genhtml_legend=1 00:14:00.770 --rc geninfo_all_blocks=1 00:14:00.770 --rc geninfo_unexecuted_blocks=1 00:14:00.770 00:14:00.770 ' 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.770 --rc genhtml_branch_coverage=1 00:14:00.770 --rc genhtml_function_coverage=1 00:14:00.770 --rc genhtml_legend=1 00:14:00.770 --rc geninfo_all_blocks=1 00:14:00.770 --rc geninfo_unexecuted_blocks=1 00:14:00.770 00:14:00.770 ' 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.770 --rc genhtml_branch_coverage=1 00:14:00.770 --rc genhtml_function_coverage=1 00:14:00.770 --rc genhtml_legend=1 00:14:00.770 --rc geninfo_all_blocks=1 00:14:00.770 --rc geninfo_unexecuted_blocks=1 00:14:00.770 00:14:00.770 ' 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:00.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.770 --rc genhtml_branch_coverage=1 00:14:00.770 --rc genhtml_function_coverage=1 00:14:00.770 --rc genhtml_legend=1 00:14:00.770 --rc geninfo_all_blocks=1 00:14:00.770 --rc geninfo_unexecuted_blocks=1 00:14:00.770 00:14:00.770 ' 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:00.770 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:00.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:00.771 05:35:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.348 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.349 05:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:07.349 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.349 05:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:07.349 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.349 05:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:07.349 Found net devices under 0000:86:00.0: cvl_0_0 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:07.349 Found net devices under 0000:86:00.1: cvl_0_1 
00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:07.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:07.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:14:07.349 00:14:07.349 --- 10.0.0.2 ping statistics --- 00:14:07.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.349 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:14:07.349 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:07.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:14:07.349 00:14:07.349 --- 10.0.0.1 ping statistics --- 00:14:07.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.350 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:07.350 05:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1713199 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1713199 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1713199 ']' 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 [2024-11-27 05:35:54.506701] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:14:07.350 [2024-11-27 05:35:54.506753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.350 [2024-11-27 05:35:54.587927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.350 [2024-11-27 05:35:54.628526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.350 [2024-11-27 05:35:54.628559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.350 [2024-11-27 05:35:54.628567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.350 [2024-11-27 05:35:54.628573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.350 [2024-11-27 05:35:54.628578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:07.350 [2024-11-27 05:35:54.629113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 [2024-11-27 05:35:54.764577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 [2024-11-27 05:35:54.784780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 NULL1 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.350 05:35:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:07.350 [2024-11-27 05:35:54.842725] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:14:07.350 [2024-11-27 05:35:54.842762] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713226 ] 00:14:07.350 Attached to nqn.2016-06.io.spdk:cnode1 00:14:07.350 Namespace ID: 1 size: 1GB 00:14:07.350 fused_ordering(0) 00:14:07.350 fused_ordering(1) 00:14:07.350 fused_ordering(2) 00:14:07.350 fused_ordering(3) 00:14:07.350 fused_ordering(4) 00:14:07.350 fused_ordering(5) 00:14:07.350 fused_ordering(6) 00:14:07.350 fused_ordering(7) 00:14:07.350 fused_ordering(8) 00:14:07.350 fused_ordering(9) 00:14:07.350 fused_ordering(10) 00:14:07.350 fused_ordering(11) 00:14:07.350 fused_ordering(12) 00:14:07.350 fused_ordering(13) 00:14:07.350 fused_ordering(14) 00:14:07.351 fused_ordering(15) 00:14:07.351 fused_ordering(16) 00:14:07.351 fused_ordering(17) 00:14:07.351 fused_ordering(18) 00:14:07.351 fused_ordering(19) 00:14:07.351 fused_ordering(20) 00:14:07.351 fused_ordering(21) 00:14:07.351 fused_ordering(22) 00:14:07.351 fused_ordering(23) 00:14:07.351 fused_ordering(24) 00:14:07.351 fused_ordering(25) 00:14:07.351 fused_ordering(26) 00:14:07.351 fused_ordering(27) 00:14:07.351 
fused_ordering(28) 00:14:07.351 fused_ordering(29) 00:14:07.351 fused_ordering(30) 00:14:07.351 fused_ordering(31) 00:14:07.351 fused_ordering(32) 00:14:07.351 fused_ordering(33) 00:14:07.351 fused_ordering(34) 00:14:07.351 fused_ordering(35) 00:14:07.351 fused_ordering(36) 00:14:07.351 fused_ordering(37) 00:14:07.351 fused_ordering(38) 00:14:07.351 fused_ordering(39) 00:14:07.351 fused_ordering(40) 00:14:07.351 fused_ordering(41) 00:14:07.351 fused_ordering(42) 00:14:07.351 fused_ordering(43) 00:14:07.351 fused_ordering(44) 00:14:07.351 fused_ordering(45) 00:14:07.351 fused_ordering(46) 00:14:07.351 fused_ordering(47) 00:14:07.351 fused_ordering(48) 00:14:07.351 fused_ordering(49) 00:14:07.351 fused_ordering(50) 00:14:07.351 fused_ordering(51) 00:14:07.351 fused_ordering(52) 00:14:07.351 fused_ordering(53) 00:14:07.351 fused_ordering(54) 00:14:07.351 fused_ordering(55) 00:14:07.351 fused_ordering(56) 00:14:07.351 fused_ordering(57) 00:14:07.351 fused_ordering(58) 00:14:07.351 fused_ordering(59) 00:14:07.351 fused_ordering(60) 00:14:07.351 fused_ordering(61) 00:14:07.351 fused_ordering(62) 00:14:07.351 fused_ordering(63) 00:14:07.351 fused_ordering(64) 00:14:07.351 fused_ordering(65) 00:14:07.351 fused_ordering(66) 00:14:07.351 fused_ordering(67) 00:14:07.351 fused_ordering(68) 00:14:07.351 fused_ordering(69) 00:14:07.351 fused_ordering(70) 00:14:07.351 fused_ordering(71) 00:14:07.351 fused_ordering(72) 00:14:07.351 fused_ordering(73) 00:14:07.351 fused_ordering(74) 00:14:07.351 fused_ordering(75) 00:14:07.351 fused_ordering(76) 00:14:07.351 fused_ordering(77) 00:14:07.351 fused_ordering(78) 00:14:07.351 fused_ordering(79) 00:14:07.351 fused_ordering(80) 00:14:07.351 fused_ordering(81) 00:14:07.351 fused_ordering(82) 00:14:07.351 fused_ordering(83) 00:14:07.351 fused_ordering(84) 00:14:07.351 fused_ordering(85) 00:14:07.351 fused_ordering(86) 00:14:07.351 fused_ordering(87) 00:14:07.351 fused_ordering(88) 00:14:07.351 fused_ordering(89) 00:14:07.351 
fused_ordering(90) 00:14:07.351 fused_ordering(91) 00:14:07.351 fused_ordering(92) 00:14:07.351 fused_ordering(93) 00:14:07.351 fused_ordering(94) 00:14:07.351 fused_ordering(95) 00:14:07.351 fused_ordering(96) 00:14:07.351 fused_ordering(97) 00:14:07.351 fused_ordering(98) 00:14:07.351 fused_ordering(99) 00:14:07.351 fused_ordering(100) 00:14:07.351 fused_ordering(101) 00:14:07.351 fused_ordering(102) 00:14:07.351 fused_ordering(103) 00:14:07.351 fused_ordering(104) 00:14:07.351 fused_ordering(105) 00:14:07.351 fused_ordering(106) 00:14:07.351 fused_ordering(107) 00:14:07.351 fused_ordering(108) 00:14:07.351 fused_ordering(109) 00:14:07.351 fused_ordering(110) 00:14:07.351 fused_ordering(111) 00:14:07.351 fused_ordering(112) 00:14:07.351 fused_ordering(113) 00:14:07.351 fused_ordering(114) 00:14:07.351 fused_ordering(115) 00:14:07.351 fused_ordering(116) 00:14:07.351 fused_ordering(117) 00:14:07.351 fused_ordering(118) 00:14:07.351 fused_ordering(119) 00:14:07.351 fused_ordering(120) 00:14:07.351 fused_ordering(121) 00:14:07.351 fused_ordering(122) 00:14:07.351 fused_ordering(123) 00:14:07.351 fused_ordering(124) 00:14:07.351 fused_ordering(125) 00:14:07.351 fused_ordering(126) 00:14:07.351 fused_ordering(127) 00:14:07.351 fused_ordering(128) 00:14:07.351 fused_ordering(129) 00:14:07.351 fused_ordering(130) 00:14:07.351 fused_ordering(131) 00:14:07.351 fused_ordering(132) 00:14:07.351 fused_ordering(133) 00:14:07.351 fused_ordering(134) 00:14:07.351 fused_ordering(135) 00:14:07.351 fused_ordering(136) 00:14:07.351 fused_ordering(137) 00:14:07.351 fused_ordering(138) 00:14:07.351 fused_ordering(139) 00:14:07.351 fused_ordering(140) 00:14:07.351 fused_ordering(141) 00:14:07.351 fused_ordering(142) 00:14:07.351 fused_ordering(143) 00:14:07.351 fused_ordering(144) 00:14:07.351 fused_ordering(145) 00:14:07.351 fused_ordering(146) 00:14:07.351 fused_ordering(147) 00:14:07.351 fused_ordering(148) 00:14:07.351 fused_ordering(149) 00:14:07.351 fused_ordering(150) 
00:14:07.351 fused_ordering(151) 00:14:07.351 fused_ordering(152) 00:14:07.351 fused_ordering(153) 00:14:07.351 fused_ordering(154) 00:14:07.351 fused_ordering(155) 00:14:07.351 fused_ordering(156) 00:14:07.351 fused_ordering(157) 00:14:07.351 fused_ordering(158) 00:14:07.351 fused_ordering(159) 00:14:07.351 fused_ordering(160) 00:14:07.351 fused_ordering(161) 00:14:07.351 fused_ordering(162) 00:14:07.351 fused_ordering(163) 00:14:07.351 fused_ordering(164) 00:14:07.351 fused_ordering(165) 00:14:07.351 fused_ordering(166) 00:14:07.351 fused_ordering(167) 00:14:07.351 fused_ordering(168) 00:14:07.351 fused_ordering(169) 00:14:07.351 fused_ordering(170) 00:14:07.351 fused_ordering(171) 00:14:07.351 fused_ordering(172) 00:14:07.351 fused_ordering(173) 00:14:07.351 fused_ordering(174) 00:14:07.351 fused_ordering(175) 00:14:07.351 fused_ordering(176) 00:14:07.351 fused_ordering(177) 00:14:07.351 fused_ordering(178) 00:14:07.351 fused_ordering(179) 00:14:07.351 fused_ordering(180) 00:14:07.351 fused_ordering(181) 00:14:07.351 fused_ordering(182) 00:14:07.351 fused_ordering(183) 00:14:07.351 fused_ordering(184) 00:14:07.351 fused_ordering(185) 00:14:07.351 fused_ordering(186) 00:14:07.351 fused_ordering(187) 00:14:07.351 fused_ordering(188) 00:14:07.351 fused_ordering(189) 00:14:07.351 fused_ordering(190) 00:14:07.351 fused_ordering(191) 00:14:07.351 fused_ordering(192) 00:14:07.351 fused_ordering(193) 00:14:07.351 fused_ordering(194) 00:14:07.351 fused_ordering(195) 00:14:07.351 fused_ordering(196) 00:14:07.351 fused_ordering(197) 00:14:07.351 fused_ordering(198) 00:14:07.351 fused_ordering(199) 00:14:07.351 fused_ordering(200) 00:14:07.351 fused_ordering(201) 00:14:07.352 fused_ordering(202) 00:14:07.352 fused_ordering(203) 00:14:07.352 fused_ordering(204) 00:14:07.352 fused_ordering(205) 00:14:07.611 fused_ordering(206) 00:14:07.611 fused_ordering(207) 00:14:07.611 fused_ordering(208) 00:14:07.611 fused_ordering(209) 00:14:07.611 fused_ordering(210) 00:14:07.611 
fused_ordering(211) 00:14:07.611 fused_ordering(212) 00:14:07.611 fused_ordering(213) 00:14:07.611 fused_ordering(214) 00:14:07.611 fused_ordering(215) 00:14:07.611 fused_ordering(216) 00:14:07.611 fused_ordering(217) 00:14:07.611 fused_ordering(218) 00:14:07.611 fused_ordering(219) 00:14:07.612 fused_ordering(220) 00:14:07.612 fused_ordering(221) 00:14:07.612 fused_ordering(222) 00:14:07.612 fused_ordering(223) 00:14:07.612 fused_ordering(224) 00:14:07.612 fused_ordering(225) 00:14:07.612 fused_ordering(226) 00:14:07.612 fused_ordering(227) 00:14:07.612 fused_ordering(228) 00:14:07.612 fused_ordering(229) 00:14:07.612 fused_ordering(230) 00:14:07.612 fused_ordering(231) 00:14:07.612 fused_ordering(232) 00:14:07.612 fused_ordering(233) 00:14:07.612 fused_ordering(234) 00:14:07.612 fused_ordering(235) 00:14:07.612 fused_ordering(236) 00:14:07.612 fused_ordering(237) 00:14:07.612 fused_ordering(238) 00:14:07.612 fused_ordering(239) 00:14:07.612 fused_ordering(240) 00:14:07.612 fused_ordering(241) 00:14:07.612 fused_ordering(242) 00:14:07.612 fused_ordering(243) 00:14:07.612 fused_ordering(244) 00:14:07.612 fused_ordering(245) 00:14:07.612 fused_ordering(246) 00:14:07.612 fused_ordering(247) 00:14:07.612 fused_ordering(248) 00:14:07.612 fused_ordering(249) 00:14:07.612 fused_ordering(250) 00:14:07.612 fused_ordering(251) 00:14:07.612 fused_ordering(252) 00:14:07.612 fused_ordering(253) 00:14:07.612 fused_ordering(254) 00:14:07.612 fused_ordering(255) 00:14:07.612 fused_ordering(256) 00:14:07.612 fused_ordering(257) 00:14:07.612 fused_ordering(258) 00:14:07.612 fused_ordering(259) 00:14:07.612 fused_ordering(260) 00:14:07.612 fused_ordering(261) 00:14:07.612 fused_ordering(262) 00:14:07.612 fused_ordering(263) 00:14:07.612 fused_ordering(264) 00:14:07.612 fused_ordering(265) 00:14:07.612 fused_ordering(266) 00:14:07.612 fused_ordering(267) 00:14:07.612 fused_ordering(268) 00:14:07.612 fused_ordering(269) 00:14:07.612 fused_ordering(270) 00:14:07.612 fused_ordering(271) 
00:14:07.612 fused_ordering(272) 00:14:07.612 fused_ordering(273) 00:14:07.612 fused_ordering(274) 00:14:07.612 fused_ordering(275) 00:14:07.612 fused_ordering(276) 00:14:07.612 fused_ordering(277) 00:14:07.612 fused_ordering(278) 00:14:07.612 fused_ordering(279) 00:14:07.612 fused_ordering(280) 00:14:07.612 fused_ordering(281) 00:14:07.612 fused_ordering(282) 00:14:07.612 fused_ordering(283) 00:14:07.612 fused_ordering(284) 00:14:07.612 fused_ordering(285) 00:14:07.612 fused_ordering(286) 00:14:07.612 fused_ordering(287) 00:14:07.612 fused_ordering(288) 00:14:07.612 fused_ordering(289) 00:14:07.612 fused_ordering(290) 00:14:07.612 fused_ordering(291) 00:14:07.612 fused_ordering(292) 00:14:07.612 fused_ordering(293) 00:14:07.612 fused_ordering(294) 00:14:07.612 fused_ordering(295) 00:14:07.612 fused_ordering(296) 00:14:07.612 fused_ordering(297) 00:14:07.612 fused_ordering(298) 00:14:07.612 fused_ordering(299) 00:14:07.612 fused_ordering(300) 00:14:07.612 fused_ordering(301) 00:14:07.612 fused_ordering(302) 00:14:07.612 fused_ordering(303) 00:14:07.612 fused_ordering(304) 00:14:07.612 fused_ordering(305) 00:14:07.612 fused_ordering(306) 00:14:07.612 fused_ordering(307) 00:14:07.612 fused_ordering(308) 00:14:07.612 fused_ordering(309) 00:14:07.612 fused_ordering(310) 00:14:07.612 fused_ordering(311) 00:14:07.612 fused_ordering(312) 00:14:07.612 fused_ordering(313) 00:14:07.612 fused_ordering(314) 00:14:07.612 fused_ordering(315) 00:14:07.612 fused_ordering(316) 00:14:07.612 fused_ordering(317) 00:14:07.612 fused_ordering(318) 00:14:07.612 fused_ordering(319) 00:14:07.612 fused_ordering(320) 00:14:07.612 fused_ordering(321) 00:14:07.612 fused_ordering(322) 00:14:07.612 fused_ordering(323) 00:14:07.612 fused_ordering(324) 00:14:07.612 fused_ordering(325) 00:14:07.612 fused_ordering(326) 00:14:07.612 fused_ordering(327) 00:14:07.612 fused_ordering(328) 00:14:07.612 fused_ordering(329) 00:14:07.612 fused_ordering(330) 00:14:07.612 fused_ordering(331) 00:14:07.612 
fused_ordering(332) 00:14:07.612 fused_ordering(333) 00:14:07.612 fused_ordering(334) 00:14:07.612 fused_ordering(335) 00:14:07.612 fused_ordering(336) 00:14:07.612 fused_ordering(337) 00:14:07.612 fused_ordering(338) 00:14:07.612 fused_ordering(339) 00:14:07.612 fused_ordering(340) 00:14:07.612 fused_ordering(341) 00:14:07.612 fused_ordering(342) 00:14:07.612 fused_ordering(343) 00:14:07.612 fused_ordering(344) 00:14:07.612 fused_ordering(345) 00:14:07.612 fused_ordering(346) 00:14:07.612 fused_ordering(347) 00:14:07.612 fused_ordering(348) 00:14:07.612 fused_ordering(349) 00:14:07.612 fused_ordering(350) 00:14:07.612 fused_ordering(351) 00:14:07.612 fused_ordering(352) 00:14:07.612 fused_ordering(353) 00:14:07.612 fused_ordering(354) 00:14:07.612 fused_ordering(355) 00:14:07.612 fused_ordering(356) 00:14:07.612 fused_ordering(357) 00:14:07.612 fused_ordering(358) 00:14:07.612 fused_ordering(359) 00:14:07.612 fused_ordering(360) 00:14:07.612 fused_ordering(361) 00:14:07.612 fused_ordering(362) 00:14:07.612 fused_ordering(363) 00:14:07.612 fused_ordering(364) 00:14:07.612 fused_ordering(365) 00:14:07.612 fused_ordering(366) 00:14:07.612 fused_ordering(367) 00:14:07.612 fused_ordering(368) 00:14:07.612 fused_ordering(369) 00:14:07.612 fused_ordering(370) 00:14:07.612 fused_ordering(371) 00:14:07.612 fused_ordering(372) 00:14:07.612 fused_ordering(373) 00:14:07.612 fused_ordering(374) 00:14:07.612 fused_ordering(375) 00:14:07.612 fused_ordering(376) 00:14:07.612 fused_ordering(377) 00:14:07.612 fused_ordering(378) 00:14:07.612 fused_ordering(379) 00:14:07.612 fused_ordering(380) 00:14:07.612 fused_ordering(381) 00:14:07.612 fused_ordering(382) 00:14:07.612 fused_ordering(383) 00:14:07.612 fused_ordering(384) 00:14:07.612 fused_ordering(385) 00:14:07.612 fused_ordering(386) 00:14:07.612 fused_ordering(387) 00:14:07.612 fused_ordering(388) 00:14:07.612 fused_ordering(389) 00:14:07.612 fused_ordering(390) 00:14:07.612 fused_ordering(391) 00:14:07.612 fused_ordering(392) 
00:14:07.612 fused_ordering(393) 00:14:07.612 fused_ordering(394) 00:14:07.612 fused_ordering(395) 00:14:07.612 fused_ordering(396) 00:14:07.612 fused_ordering(397) 00:14:07.612 fused_ordering(398) 00:14:07.612 fused_ordering(399) 00:14:07.612 fused_ordering(400) 00:14:07.612 fused_ordering(401) 00:14:07.612 fused_ordering(402) 00:14:07.612 fused_ordering(403) 00:14:07.612 fused_ordering(404) 00:14:07.612 fused_ordering(405) 00:14:07.612 fused_ordering(406) 00:14:07.612 fused_ordering(407) 00:14:07.612 fused_ordering(408) 00:14:07.612 fused_ordering(409) 00:14:07.612 fused_ordering(410) 00:14:07.871 fused_ordering(411) 00:14:07.871 fused_ordering(412) 00:14:07.871 fused_ordering(413) 00:14:07.871 fused_ordering(414) 00:14:07.871 fused_ordering(415) 00:14:07.871 fused_ordering(416) 00:14:07.871 fused_ordering(417) 00:14:07.871 fused_ordering(418) 00:14:07.871 fused_ordering(419) 00:14:07.871 fused_ordering(420) 00:14:07.871 fused_ordering(421) 00:14:07.871 fused_ordering(422) 00:14:07.871 fused_ordering(423) 00:14:07.871 fused_ordering(424) 00:14:07.871 fused_ordering(425) 00:14:07.871 fused_ordering(426) 00:14:07.871 fused_ordering(427) 00:14:07.871 fused_ordering(428) 00:14:07.871 fused_ordering(429) 00:14:07.871 fused_ordering(430) 00:14:07.871 fused_ordering(431) 00:14:07.871 fused_ordering(432) 00:14:07.871 fused_ordering(433) 00:14:07.871 fused_ordering(434) 00:14:07.871 fused_ordering(435) 00:14:07.871 fused_ordering(436) 00:14:07.871 fused_ordering(437) 00:14:07.871 fused_ordering(438) 00:14:07.871 fused_ordering(439) 00:14:07.871 fused_ordering(440) 00:14:07.871 fused_ordering(441) 00:14:07.871 fused_ordering(442) 00:14:07.871 fused_ordering(443) 00:14:07.871 fused_ordering(444) 00:14:07.871 fused_ordering(445) 00:14:07.871 fused_ordering(446) 00:14:07.871 fused_ordering(447) 00:14:07.871 fused_ordering(448) 00:14:07.871 fused_ordering(449) 00:14:07.871 fused_ordering(450) 00:14:07.871 fused_ordering(451) 00:14:07.871 fused_ordering(452) 00:14:07.871 
fused_ordering(453) 00:14:07.871 fused_ordering(454) 00:14:07.871 fused_ordering(455) 00:14:07.871 fused_ordering(456) 00:14:07.871 fused_ordering(457) 00:14:07.871 fused_ordering(458) 00:14:07.871 fused_ordering(459) 00:14:07.871 fused_ordering(460) 00:14:07.871 fused_ordering(461) 00:14:07.871 fused_ordering(462) 00:14:07.871 fused_ordering(463) 00:14:07.871 fused_ordering(464) 00:14:07.871 fused_ordering(465) 00:14:07.871 fused_ordering(466) 00:14:07.871 fused_ordering(467) 00:14:07.871 fused_ordering(468) 00:14:07.871 fused_ordering(469) 00:14:07.871 fused_ordering(470) 00:14:07.871 fused_ordering(471) 00:14:07.871 fused_ordering(472) 00:14:07.871 fused_ordering(473) 00:14:07.871 fused_ordering(474) 00:14:07.871 fused_ordering(475) 00:14:07.871 fused_ordering(476) 00:14:07.871 fused_ordering(477) 00:14:07.871 fused_ordering(478) 00:14:07.871 fused_ordering(479) 00:14:07.871 fused_ordering(480) 00:14:07.871 fused_ordering(481) 00:14:07.871 fused_ordering(482) 00:14:07.871 fused_ordering(483) 00:14:07.871 fused_ordering(484) 00:14:07.871 fused_ordering(485) 00:14:07.871 fused_ordering(486) 00:14:07.871 fused_ordering(487) 00:14:07.871 fused_ordering(488) 00:14:07.871 fused_ordering(489) 00:14:07.871 fused_ordering(490) 00:14:07.871 fused_ordering(491) 00:14:07.871 fused_ordering(492) 00:14:07.871 fused_ordering(493) 00:14:07.871 fused_ordering(494) 00:14:07.871 fused_ordering(495) 00:14:07.871 fused_ordering(496) 00:14:07.871 fused_ordering(497) 00:14:07.871 fused_ordering(498) 00:14:07.871 fused_ordering(499) 00:14:07.871 fused_ordering(500) 00:14:07.871 fused_ordering(501) 00:14:07.871 fused_ordering(502) 00:14:07.871 fused_ordering(503) 00:14:07.871 fused_ordering(504) 00:14:07.872 fused_ordering(505) 00:14:07.872 fused_ordering(506) 00:14:07.872 fused_ordering(507) 00:14:07.872 fused_ordering(508) 00:14:07.872 fused_ordering(509) 00:14:07.872 fused_ordering(510) 00:14:07.872 fused_ordering(511) 00:14:07.872 fused_ordering(512) 00:14:07.872 fused_ordering(513) 
00:14:07.872 fused_ordering(514) 00:14:07.872 fused_ordering(515) 00:14:07.872 fused_ordering(516) 00:14:07.872 fused_ordering(517) 00:14:07.872 fused_ordering(518) 00:14:07.872 fused_ordering(519) 00:14:07.872 fused_ordering(520) 00:14:07.872 fused_ordering(521) 00:14:07.872 fused_ordering(522) 00:14:07.872 fused_ordering(523) 00:14:07.872 fused_ordering(524) 00:14:07.872 fused_ordering(525) 00:14:07.872 fused_ordering(526) 00:14:07.872 fused_ordering(527) 00:14:07.872 fused_ordering(528) 00:14:07.872 fused_ordering(529) 00:14:07.872 fused_ordering(530) 00:14:07.872 fused_ordering(531) 00:14:07.872 fused_ordering(532) 00:14:07.872 fused_ordering(533) 00:14:07.872 fused_ordering(534) 00:14:07.872 fused_ordering(535) 00:14:07.872 fused_ordering(536) 00:14:07.872 fused_ordering(537) 00:14:07.872 fused_ordering(538) 00:14:07.872 fused_ordering(539) 00:14:07.872 fused_ordering(540) 00:14:07.872 fused_ordering(541) 00:14:07.872 fused_ordering(542) 00:14:07.872 fused_ordering(543) 00:14:07.872 fused_ordering(544) 00:14:07.872 fused_ordering(545) 00:14:07.872 fused_ordering(546) 00:14:07.872 fused_ordering(547) 00:14:07.872 fused_ordering(548) 00:14:07.872 fused_ordering(549) 00:14:07.872 fused_ordering(550) 00:14:07.872 fused_ordering(551) 00:14:07.872 fused_ordering(552) 00:14:07.872 fused_ordering(553) 00:14:07.872 fused_ordering(554) 00:14:07.872 fused_ordering(555) 00:14:07.872 fused_ordering(556) 00:14:07.872 fused_ordering(557) 00:14:07.872 fused_ordering(558) 00:14:07.872 fused_ordering(559) 00:14:07.872 fused_ordering(560) 00:14:07.872 fused_ordering(561) 00:14:07.872 fused_ordering(562) 00:14:07.872 fused_ordering(563) 00:14:07.872 fused_ordering(564) 00:14:07.872 fused_ordering(565) 00:14:07.872 fused_ordering(566) 00:14:07.872 fused_ordering(567) 00:14:07.872 fused_ordering(568) 00:14:07.872 fused_ordering(569) 00:14:07.872 fused_ordering(570) 00:14:07.872 fused_ordering(571) 00:14:07.872 fused_ordering(572) 00:14:07.872 fused_ordering(573) 00:14:07.872 
fused_ordering(574) 00:14:07.872 [fused_ordering(575) through fused_ordering(996) elided: identical sequential per-iteration entries, timestamps 00:14:07.872 through 00:14:08.702] 00:14:08.702 fused_ordering(997)
00:14:08.702 fused_ordering(998) [fused_ordering(999) through fused_ordering(1022) elided: identical sequential per-iteration entries] 00:14:08.702 fused_ordering(1023) 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:08.702 rmmod nvme_tcp 00:14:08.702 rmmod nvme_fabrics 00:14:08.702 rmmod nvme_keyring 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1713199 ']' 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1713199 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1713199 ']' 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1713199 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.702 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1713199 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1713199' 00:14:08.961 killing process with pid 1713199 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1713199 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1713199 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.961 05:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.499 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:11.499 00:14:11.499 real 0m10.671s 00:14:11.499 user 0m4.823s 00:14:11.499 sys 0m5.944s 00:14:11.499 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.499 05:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:11.499 ************************************ 00:14:11.499 END TEST nvmf_fused_ordering 00:14:11.499 ************************************ 00:14:11.499 05:35:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:11.499 05:35:58 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:11.499 05:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.499 05:35:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:11.499 ************************************ 00:14:11.499 START TEST nvmf_ns_masking 00:14:11.499 ************************************ 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:11.499 * Looking for test storage... 00:14:11.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.499 05:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:11.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.499 --rc genhtml_branch_coverage=1 00:14:11.499 --rc genhtml_function_coverage=1 00:14:11.499 --rc genhtml_legend=1 00:14:11.499 --rc geninfo_all_blocks=1 00:14:11.499 --rc geninfo_unexecuted_blocks=1 00:14:11.499 00:14:11.499 ' 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:11.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.499 --rc genhtml_branch_coverage=1 00:14:11.499 --rc genhtml_function_coverage=1 00:14:11.499 --rc genhtml_legend=1 00:14:11.499 --rc geninfo_all_blocks=1 00:14:11.499 --rc geninfo_unexecuted_blocks=1 00:14:11.499 00:14:11.499 ' 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:11.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.499 --rc genhtml_branch_coverage=1 00:14:11.499 --rc genhtml_function_coverage=1 00:14:11.499 --rc genhtml_legend=1 00:14:11.499 --rc geninfo_all_blocks=1 00:14:11.499 --rc geninfo_unexecuted_blocks=1 00:14:11.499 00:14:11.499 ' 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:11.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.499 --rc genhtml_branch_coverage=1 00:14:11.499 --rc 
genhtml_function_coverage=1 00:14:11.499 --rc genhtml_legend=1 00:14:11.499 --rc geninfo_all_blocks=1 00:14:11.499 --rc geninfo_unexecuted_blocks=1 00:14:11.499 00:14:11.499 ' 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.499 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=5604d111-5701-4b75-9492-6b285671ca94 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=19ef13e7-69eb-453f-a0d8-f3aecd8b876b 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7eebfd4f-ab49-4de5-88f9-2d84f7c74512 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:11.500 05:35:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:18.076 05:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:18.076 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.077 05:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:18.077 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:18.077 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:14:18.077 Found net devices under 0000:86:00.0: cvl_0_0 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:18.077 Found net devices under 0000:86:00.1: cvl_0_1 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.077 05:36:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:18.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:14:18.077 00:14:18.077 --- 10.0.0.2 ping statistics --- 00:14:18.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.077 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:14:18.077 00:14:18.077 --- 10.0.0.1 ping statistics --- 00:14:18.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.077 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.077 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1717118 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1717118 
00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1717118 ']' 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.078 [2024-11-27 05:36:05.251869] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:14:18.078 [2024-11-27 05:36:05.251925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.078 [2024-11-27 05:36:05.330889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.078 [2024-11-27 05:36:05.374371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.078 [2024-11-27 05:36:05.374408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:18.078 [2024-11-27 05:36:05.374415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.078 [2024-11-27 05:36:05.374422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.078 [2024-11-27 05:36:05.374427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.078 [2024-11-27 05:36:05.375009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:18.078 [2024-11-27 05:36:05.692499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:14:18.078 Malloc1 00:14:18.078 05:36:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:18.338 Malloc2 00:14:18.338 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:18.338 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:18.597 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.856 [2024-11-27 05:36:06.675915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.856 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:18.856 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7eebfd4f-ab49-4de5-88f9-2d84f7c74512 -a 10.0.0.2 -s 4420 -i 4 00:14:18.856 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.856 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:18.856 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.856 05:36:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:18.856 05:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.391 [ 0]:0x1 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.391 
05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=374a93aa2bc1465ead734a3b026f45d1 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 374a93aa2bc1465ead734a3b026f45d1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.391 05:36:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.391 [ 0]:0x1 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=374a93aa2bc1465ead734a3b026f45d1 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 374a93aa2bc1465ead734a3b026f45d1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:21.391 [ 1]:0x2 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f984c64651142ffb560cc2b81d1bfea 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f984c64651142ffb560cc2b81d1bfea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.391 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.649 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:21.908 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:21.908 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7eebfd4f-ab49-4de5-88f9-2d84f7c74512 -a 10.0.0.2 -s 4420 -i 4 00:14:21.908 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:21.908 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:21.908 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.908 05:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:21.908 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:21.908 05:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.442 05:36:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.442 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.442 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.442 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:24.442 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.442 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.443 [ 0]:0x2 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f984c64651142ffb560cc2b81d1bfea 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f984c64651142ffb560cc2b81d1bfea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.443 [ 0]:0x1 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=374a93aa2bc1465ead734a3b026f45d1 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 374a93aa2bc1465ead734a3b026f45d1 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.443 [ 1]:0x2 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.443 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f984c64651142ffb560cc2b81d1bfea 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f984c64651142ffb560cc2b81d1bfea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:24.701 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:24.959 [ 0]:0x2 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f984c64651142ffb560cc2b81d1bfea 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f984c64651142ffb560cc2b81d1bfea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.959 05:36:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:25.217 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:25.217 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7eebfd4f-ab49-4de5-88f9-2d84f7c74512 -a 10.0.0.2 -s 4420 -i 4 00:14:25.217 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:25.217 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:25.217 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:25.217 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:25.217 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:25.217 05:36:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:27.752 [ 0]:0x1 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:27.752 05:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=374a93aa2bc1465ead734a3b026f45d1 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 374a93aa2bc1465ead734a3b026f45d1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:27.752 [ 1]:0x2 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f984c64651142ffb560cc2b81d1bfea 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f984c64651142ffb560cc2b81d1bfea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:27.752 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:28.011 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:28.011 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.011 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.011 
05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:28.011 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.011 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.012 [ 0]:0x2 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f984c64651142ffb560cc2b81d1bfea 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f984c64651142ffb560cc2b81d1bfea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.012 05:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:28.012 05:36:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:28.271 [2024-11-27 05:36:16.054987] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:28.271 request: 00:14:28.271 { 00:14:28.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.271 "nsid": 2, 00:14:28.271 "host": "nqn.2016-06.io.spdk:host1", 00:14:28.271 "method": "nvmf_ns_remove_host", 00:14:28.271 "req_id": 1 00:14:28.271 } 00:14:28.271 Got JSON-RPC error response 00:14:28.271 response: 00:14:28.271 { 00:14:28.271 "code": -32602, 00:14:28.271 "message": "Invalid parameters" 00:14:28.271 } 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:28.271 05:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:28.271 [ 0]:0x2 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6f984c64651142ffb560cc2b81d1bfea 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6f984c64651142ffb560cc2b81d1bfea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:28.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1719493 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1719493 /var/tmp/host.sock 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1719493 ']' 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:28.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.271 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:28.531 [2024-11-27 05:36:16.286290] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
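The `uuid2nguid` conversion exercised in the RPC calls below (a UUID piped through `tr -d -` before being passed as `-g` to `nvmf_subsystem_add_ns`) can be sketched as a minimal shell helper. This is an assumed implementation reconstructed from the NGUIDs visible in this log, not SPDK's actual `nvmf/common.sh` code:

```shell
# Hypothetical sketch of the uuid2nguid helper seen in this log:
# an NGUID is the 32 hex digits of the UUID with dashes stripped
# and letters uppercased.
uuid2nguid() {
  printf '%s' "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}

# Matches the -g argument passed to nvmf_subsystem_add_ns below:
uuid2nguid 5604d111-5701-4b75-9492-6b285671ca94
```

The all-zero comparison string used by `ns_is_visible` (`\0\0…` in the `[[ ... != ... ]]` tests above) is the counterpart check: a masked namespace reports an all-zero NGUID, so a non-zero result from this conversion is what the test expects for a visible namespace.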
00:14:28.531 [2024-11-27 05:36:16.286332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719493 ] 00:14:28.531 [2024-11-27 05:36:16.358283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.531 [2024-11-27 05:36:16.398881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.790 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.790 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:28.790 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.048 05:36:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.048 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 5604d111-5701-4b75-9492-6b285671ca94 00:14:29.048 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:29.048 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5604D11157014B7594926B285671CA94 -i 00:14:29.307 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 19ef13e7-69eb-453f-a0d8-f3aecd8b876b 00:14:29.307 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:29.307 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 19EF13E769EB453FA0D8F3AECD8B876B -i 00:14:29.566 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.825 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:29.825 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:29.825 05:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:30.083 nvme0n1 00:14:30.343 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:30.344 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:30.603 nvme1n2 00:14:30.603 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:30.603 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:30.603 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:30.603 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:30.603 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:30.862 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:30.862 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:30.862 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:30.862 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:31.121 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 5604d111-5701-4b75-9492-6b285671ca94 == \5\6\0\4\d\1\1\1\-\5\7\0\1\-\4\b\7\5\-\9\4\9\2\-\6\b\2\8\5\6\7\1\c\a\9\4 ]] 00:14:31.121 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:31.121 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:31.121 05:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:31.121 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 19ef13e7-69eb-453f-a0d8-f3aecd8b876b == \1\9\e\f\1\3\e\7\-\6\9\e\b\-\4\5\3\f\-\a\0\d\8\-\f\3\a\e\c\d\8\b\8\7\6\b ]] 00:14:31.121 05:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.381 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 5604d111-5701-4b75-9492-6b285671ca94 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5604D11157014B7594926B285671CA94 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5604D11157014B7594926B285671CA94 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.641 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:31.642 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5604D11157014B7594926B285671CA94 00:14:31.901 [2024-11-27 05:36:19.656911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:31.901 [2024-11-27 05:36:19.656946] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:31.901 [2024-11-27 05:36:19.656965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:31.901 request: 00:14:31.901 { 00:14:31.901 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.901 "namespace": { 00:14:31.901 "bdev_name": "invalid", 00:14:31.901 "nsid": 1, 00:14:31.901 "nguid": "5604D11157014B7594926B285671CA94", 00:14:31.901 "no_auto_visible": false, 00:14:31.901 "hide_metadata": false 00:14:31.901 }, 00:14:31.901 "method": "nvmf_subsystem_add_ns", 00:14:31.901 "req_id": 1 00:14:31.901 } 00:14:31.901 Got JSON-RPC error response 00:14:31.901 response: 00:14:31.901 { 00:14:31.901 "code": -32602, 00:14:31.901 "message": "Invalid parameters" 00:14:31.901 } 00:14:31.901 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:31.901 05:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.901 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.901 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.901 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 5604d111-5701-4b75-9492-6b285671ca94 00:14:31.901 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:31.901 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5604D11157014B7594926B285671CA94 -i 00:14:31.901 05:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:34.438 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:34.438 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:34.438 05:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1719493 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1719493 ']' 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1719493 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:34.438 05:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719493 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719493' 00:14:34.438 killing process with pid 1719493 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1719493 00:14:34.438 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1719493 00:14:34.697 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.697 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:34.697 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:34.697 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.697 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:34.697 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.697 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:34.697 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.697 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:14:34.697 rmmod nvme_tcp 00:14:34.697 rmmod nvme_fabrics 00:14:34.697 rmmod nvme_keyring 00:14:34.697 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1717118 ']' 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1717118 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1717118 ']' 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1717118 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1717118 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1717118' 00:14:34.957 killing process with pid 1717118 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1717118 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1717118 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
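The two `killprocess` invocations traced above follow the same pattern: confirm the PID is non-empty, probe liveness with `kill -0`, read the command name with `ps --no-headers -o comm=`, then kill and reap. A simplified sketch under those assumptions (the real autotest helper also special-cases processes launched via sudo):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the trace: check the PID is alive,
# log its command name, then terminate and reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1          # signal 0: liveness probe only
    local name
    name=$(ps --no-headers -o comm= "$pid")         # command name, no header row
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap so the PID cannot be reused
}

sleep 60 &          # throwaway background process to demonstrate on
killprocess $!
```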
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.957 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:35.215 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:35.215 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:35.215 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.215 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.215 05:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.121 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:37.121 00:14:37.121 real 0m26.007s 00:14:37.121 user 0m31.117s 00:14:37.121 sys 0m6.989s 00:14:37.121 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.121 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:37.121 ************************************ 00:14:37.121 END TEST nvmf_ns_masking 00:14:37.121 ************************************ 00:14:37.121 05:36:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:14:37.121 05:36:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:37.121 05:36:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:37.121 05:36:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.121 05:36:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.121 ************************************ 00:14:37.121 START TEST nvmf_nvme_cli 00:14:37.121 ************************************ 00:14:37.121 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:37.381 * Looking for test storage... 00:14:37.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:37.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.381 --rc genhtml_branch_coverage=1 00:14:37.381 --rc genhtml_function_coverage=1 00:14:37.381 --rc genhtml_legend=1 00:14:37.381 --rc geninfo_all_blocks=1 00:14:37.381 --rc geninfo_unexecuted_blocks=1 00:14:37.381 
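The `cmp_versions`/`lt` trace above splits both version strings on `.`, `-`, and `:` with `read -ra`, then compares field by field, padding the shorter array with zeros. A condensed sketch of that logic (the name `version_lt` is mine, not the script's):

```shell
#!/usr/bin/env bash
# Field-wise version comparison in the style of scripts/common.sh:
# returns success iff $1 is strictly lower than $2.
version_lt() {
    local IFS=.-:                                   # split on dot, dash, colon
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                                        # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"    # the comparison the lcov check performs
```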
00:14:37.381 ' 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:37.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.381 --rc genhtml_branch_coverage=1 00:14:37.381 --rc genhtml_function_coverage=1 00:14:37.381 --rc genhtml_legend=1 00:14:37.381 --rc geninfo_all_blocks=1 00:14:37.381 --rc geninfo_unexecuted_blocks=1 00:14:37.381 00:14:37.381 ' 00:14:37.381 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:37.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.381 --rc genhtml_branch_coverage=1 00:14:37.381 --rc genhtml_function_coverage=1 00:14:37.381 --rc genhtml_legend=1 00:14:37.381 --rc geninfo_all_blocks=1 00:14:37.381 --rc geninfo_unexecuted_blocks=1 00:14:37.381 00:14:37.381 ' 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:37.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.382 --rc genhtml_branch_coverage=1 00:14:37.382 --rc genhtml_function_coverage=1 00:14:37.382 --rc genhtml_legend=1 00:14:37.382 --rc geninfo_all_blocks=1 00:14:37.382 --rc geninfo_unexecuted_blocks=1 00:14:37.382 00:14:37.382 ' 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.382 05:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:37.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:37.382 05:36:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:43.956 05:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:43.956 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:43.957 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.957 05:36:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:43.957 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.957 05:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:43.957 Found net devices under 0000:86:00.0: cvl_0_0 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:43.957 Found net devices under 0000:86:00.1: cvl_0_1 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.957 05:36:31 
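The device-discovery loop traced above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by `"${pci_net_devs[@]##*/}"`) finds the kernel network interfaces registered under each candidate PCI address via sysfs. A minimal sketch of that step (function name is mine; PCI addresses are placeholders to be taken from `lspci`):

```shell
#!/usr/bin/env bash
# For each PCI address given, list the net interfaces sysfs exposes under it,
# echoing "Found net devices under ..." lines like the autotest trace.
list_pci_net_devs() {
    local pci
    for pci in "$@"; do
        # Without nullglob, an unmatched glob stays literal; -e filters that out
        local pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        [ -e "${pci_net_devs[0]}" ] || continue
        pci_net_devs=("${pci_net_devs[@]##*/}")     # strip sysfs path, keep ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
}

list_pci_net_devs 0000:86:00.0 0000:86:00.1   # placeholder addresses from the log
```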
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:43.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:14:43.957 00:14:43.957 --- 10.0.0.2 ping statistics --- 00:14:43.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.957 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:14:43.957 00:14:43.957 --- 10.0.0.1 ping statistics --- 00:14:43.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.957 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:43.957 05:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1724210 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1724210 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1724210 ']' 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.957 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.958 [2024-11-27 05:36:31.343879] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:14:43.958 [2024-11-27 05:36:31.343922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.958 [2024-11-27 05:36:31.422958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.958 [2024-11-27 05:36:31.465917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.958 [2024-11-27 05:36:31.465956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.958 [2024-11-27 05:36:31.465964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.958 [2024-11-27 05:36:31.465969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.958 [2024-11-27 05:36:31.465974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.958 [2024-11-27 05:36:31.467424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.958 [2024-11-27 05:36:31.467530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.958 [2024-11-27 05:36:31.467634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.958 [2024-11-27 05:36:31.467635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.958 [2024-11-27 05:36:31.608899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.958 Malloc0 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.958 Malloc1 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.958 [2024-11-27 05:36:31.718144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:43.958 00:14:43.958 Discovery Log Number of Records 2, Generation counter 2 00:14:43.958 =====Discovery Log Entry 0====== 00:14:43.958 trtype: tcp 00:14:43.958 adrfam: ipv4 00:14:43.958 subtype: current discovery subsystem 00:14:43.958 treq: not required 00:14:43.958 portid: 0 00:14:43.958 trsvcid: 4420 
00:14:43.958 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:43.958 traddr: 10.0.0.2 00:14:43.958 eflags: explicit discovery connections, duplicate discovery information 00:14:43.958 sectype: none 00:14:43.958 =====Discovery Log Entry 1====== 00:14:43.958 trtype: tcp 00:14:43.958 adrfam: ipv4 00:14:43.958 subtype: nvme subsystem 00:14:43.958 treq: not required 00:14:43.958 portid: 0 00:14:43.958 trsvcid: 4420 00:14:43.958 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:43.958 traddr: 10.0.0.2 00:14:43.958 eflags: none 00:14:43.958 sectype: none 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:43.958 05:36:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:45.337 05:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:45.337 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:45.337 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.337 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:45.337 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:45.337 05:36:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:47.242 
05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:47.242 /dev/nvme0n2 ]] 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:47.242 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:47.243 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:47.243 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:47.501 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.760 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.760 rmmod nvme_tcp 00:14:47.760 rmmod nvme_fabrics 00:14:48.019 rmmod nvme_keyring 00:14:48.019 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:48.019 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:48.019 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:48.019 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1724210 ']' 
00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1724210 00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1724210 ']' 00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1724210 00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1724210 00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1724210' 00:14:48.020 killing process with pid 1724210 00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1724210 00:14:48.020 05:36:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1724210 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.279 05:36:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.187 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:50.187 00:14:50.187 real 0m13.029s 00:14:50.187 user 0m20.010s 00:14:50.187 sys 0m5.075s 00:14:50.187 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.187 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:50.187 ************************************ 00:14:50.187 END TEST nvmf_nvme_cli 00:14:50.187 ************************************ 00:14:50.187 05:36:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:50.187 05:36:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:50.187 05:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.187 05:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.187 05:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.448 ************************************ 00:14:50.448 
START TEST nvmf_vfio_user 00:14:50.448 ************************************ 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:50.448 * Looking for test storage... 00:14:50.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.448 05:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:50.448 05:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:50.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.448 --rc genhtml_branch_coverage=1 00:14:50.448 --rc genhtml_function_coverage=1 00:14:50.448 --rc genhtml_legend=1 00:14:50.448 --rc geninfo_all_blocks=1 00:14:50.448 --rc geninfo_unexecuted_blocks=1 00:14:50.448 00:14:50.448 ' 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:50.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.448 --rc genhtml_branch_coverage=1 00:14:50.448 --rc genhtml_function_coverage=1 00:14:50.448 --rc genhtml_legend=1 00:14:50.448 --rc geninfo_all_blocks=1 00:14:50.448 --rc geninfo_unexecuted_blocks=1 00:14:50.448 00:14:50.448 ' 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:50.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.448 --rc genhtml_branch_coverage=1 00:14:50.448 --rc genhtml_function_coverage=1 00:14:50.448 --rc genhtml_legend=1 00:14:50.448 --rc geninfo_all_blocks=1 00:14:50.448 --rc geninfo_unexecuted_blocks=1 00:14:50.448 00:14:50.448 ' 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:50.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.448 --rc genhtml_branch_coverage=1 00:14:50.448 --rc genhtml_function_coverage=1 00:14:50.448 --rc genhtml_legend=1 00:14:50.448 --rc geninfo_all_blocks=1 00:14:50.448 --rc geninfo_unexecuted_blocks=1 00:14:50.448 00:14:50.448 ' 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.448 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.449 
05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:50.449 05:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1725501 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1725501' 00:14:50.449 Process pid: 1725501 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1725501 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 1725501 ']' 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.449 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:50.709 [2024-11-27 05:36:38.464764] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:14:50.709 [2024-11-27 05:36:38.464807] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.709 [2024-11-27 05:36:38.535730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.709 [2024-11-27 05:36:38.575076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.709 [2024-11-27 05:36:38.575117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.709 [2024-11-27 05:36:38.575124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.709 [2024-11-27 05:36:38.575131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.709 [2024-11-27 05:36:38.575137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:50.709 [2024-11-27 05:36:38.576642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.709 [2024-11-27 05:36:38.576764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.709 [2024-11-27 05:36:38.576800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.709 [2024-11-27 05:36:38.576809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.709 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.709 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:50.709 05:36:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:52.085 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:52.085 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:52.085 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:52.085 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.085 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:52.085 05:36:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:52.344 Malloc1 00:14:52.344 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:52.344 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:52.603 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:52.862 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.862 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:52.862 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:53.122 Malloc2 00:14:53.122 05:36:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:53.381 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:53.381 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:53.641 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:53.641 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:53.641 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
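The setup sequence above drives the SPDK target entirely through `scripts/rpc.py`: create the VFIOUSER transport, create a malloc bdev, create a subsystem, attach the bdev as a namespace, and add a vfio-user listener. As a minimal sketch, these invocations correspond to JSON-RPC 2.0 requests over `/var/tmp/spdk.sock`; the method names below are taken verbatim from the log, while the exact parameter key names (`allow_any_host`, `serial_number`, the `namespace`/`listen_address` shapes) are assumptions based on the SPDK RPC conventions and the socket transport is omitted:

```python
import json

def rpc_request(method, params, req_id):
    # Build a JSON-RPC 2.0 request of the kind scripts/rpc.py sends
    # to the target's UNIX domain socket.
    return {"jsonrpc": "2.0", "method": method, "id": req_id, "params": params}

# The setup steps visible in the log, expressed as RPC payloads.
# Parameter names are assumptions; method names come from the rpc.py calls above.
setup = [
    rpc_request("nvmf_create_transport", {"trtype": "VFIOUSER"}, 1),
    rpc_request("bdev_malloc_create",
                {"num_blocks": 64 * 1024 * 1024 // 512,   # MALLOC_BDEV_SIZE=64 MiB
                 "block_size": 512,                        # MALLOC_BLOCK_SIZE=512
                 "name": "Malloc1"}, 2),
    rpc_request("nvmf_create_subsystem",
                {"nqn": "nqn.2019-07.io.spdk:cnode1",
                 "allow_any_host": True,                   # the -a flag
                 "serial_number": "SPDK1"}, 3),            # the -s flag
    rpc_request("nvmf_subsystem_add_ns",
                {"nqn": "nqn.2019-07.io.spdk:cnode1",
                 "namespace": {"bdev_name": "Malloc1"}}, 4),
    rpc_request("nvmf_subsystem_add_listener",
                {"nqn": "nqn.2019-07.io.spdk:cnode1",
                 "listen_address": {
                     "trtype": "VFIOUSER",
                     "traddr": "/var/run/vfio-user/domain/vfio-user1/1"}}, 5),
]

print(json.dumps(setup[0]))
```

The same five-step sequence repeats for `cnode2` under `/var/run/vfio-user/domain/vfio-user2/2`, which is what the `seq 1 $NUM_DEVICES` loop with `NUM_DEVICES=2` produces.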
$(seq 1 $NUM_DEVICES) 00:14:53.641 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:53.641 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:53.641 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:53.641 [2024-11-27 05:36:41.568184] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:14:53.641 [2024-11-27 05:36:41.568216] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1725986 ] 00:14:53.641 [2024-11-27 05:36:41.609146] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:53.641 [2024-11-27 05:36:41.615986] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:53.641 [2024-11-27 05:36:41.616012] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8dcb023000 00:14:53.641 [2024-11-27 05:36:41.616986] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.641 [2024-11-27 05:36:41.617992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.641 [2024-11-27 05:36:41.618993] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.641 [2024-11-27 05:36:41.620001] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.641 [2024-11-27 05:36:41.621008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.641 [2024-11-27 05:36:41.623674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.641 [2024-11-27 05:36:41.624021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:53.641 [2024-11-27 05:36:41.625027] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:53.641 [2024-11-27 05:36:41.626039] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:53.641 [2024-11-27 05:36:41.626048] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8dcb018000 00:14:53.641 [2024-11-27 05:36:41.626966] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:53.641 [2024-11-27 05:36:41.638401] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:53.641 [2024-11-27 05:36:41.638426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:53.641 [2024-11-27 05:36:41.643147] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:53.641 [2024-11-27 05:36:41.643182] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:53.641 [2024-11-27 05:36:41.643255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:53.641 [2024-11-27 05:36:41.643268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:53.641 [2024-11-27 05:36:41.643273] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:53.902 [2024-11-27 05:36:41.644145] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:53.902 [2024-11-27 05:36:41.644157] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:53.902 [2024-11-27 05:36:41.644164] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:53.902 [2024-11-27 05:36:41.645151] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:53.902 [2024-11-27 05:36:41.645159] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:53.902 [2024-11-27 05:36:41.645166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:53.902 [2024-11-27 05:36:41.646158] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:53.902 [2024-11-27 05:36:41.646168] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:53.902 [2024-11-27 05:36:41.647163] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:53.902 [2024-11-27 05:36:41.647172] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:53.902 [2024-11-27 05:36:41.647176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:53.902 [2024-11-27 05:36:41.647182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:53.902 [2024-11-27 05:36:41.647289] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:53.902 [2024-11-27 05:36:41.647293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:53.902 [2024-11-27 05:36:41.647298] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:53.902 [2024-11-27 05:36:41.648173] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:53.902 [2024-11-27 05:36:41.649176] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:53.902 [2024-11-27 05:36:41.650184] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:53.902 [2024-11-27 05:36:41.651182] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:53.902 [2024-11-27 05:36:41.651262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:53.902 [2024-11-27 05:36:41.652191] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:53.902 [2024-11-27 05:36:41.652198] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:53.902 [2024-11-27 05:36:41.652202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:53.902 [2024-11-27 05:36:41.652218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:53.902 [2024-11-27 05:36:41.652225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:53.902 [2024-11-27 05:36:41.652241] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.902 [2024-11-27 05:36:41.652246] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.902 [2024-11-27 05:36:41.652249] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.902 [2024-11-27 05:36:41.652262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.902 [2024-11-27 05:36:41.652305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
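The state transitions logged above are the standard NVMe controller-enable handshake: read CC (offset 0x14) and CSTS (offset 0x1c), confirm `CC.EN = 0 && CSTS.RDY = 0`, write `CC.EN = 1`, then poll CSTS until `RDY = 1`. A toy sketch of that sequence follows; the register offsets match the log, but the controller model is a deliberately simplified stand-in, not SPDK's `nvme_ctrlr` state machine:

```python
class FakeCtrlr:
    """Toy controller model: CSTS.RDY simply mirrors CC.EN on the next read."""

    CC, CSTS = 0x14, 0x1C  # register offsets, as in the log

    def __init__(self):
        self.regs = {self.CC: 0x0, self.CSTS: 0x0}

    def read32(self, off):
        if off == self.CSTS:
            # RDY (bit 0) follows EN (bit 0) immediately in this model;
            # a real controller takes up to CAP.TO to become ready.
            self.regs[self.CSTS] = self.regs[self.CC] & 0x1
        return self.regs[off]

    def write32(self, off, val):
        self.regs[off] = val


def enable(ctrlr):
    # Mirror the logged sequence: check EN, verify disabled, set CC.EN = 1,
    # then wait for CSTS.RDY = 1 ("controller is ready").
    if ctrlr.read32(FakeCtrlr.CC) & 0x1 == 0 and ctrlr.read32(FakeCtrlr.CSTS) & 0x1 == 0:
        ctrlr.write32(FakeCtrlr.CC, ctrlr.read32(FakeCtrlr.CC) | 0x1)
    while ctrlr.read32(FakeCtrlr.CSTS) & 0x1 == 0:
        pass  # real code bounds this poll by the CAP.TO timeout (15000 ms here)
    return True
```

In the log the same handshake runs against the vfio-user endpoint, which is why the `enabling controller` notice from `vfio_user.c` appears between the CC write and the CSTS read returning 0x1.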
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:53.902 [2024-11-27 05:36:41.652314] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:53.902 [2024-11-27 05:36:41.652320] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:53.902 [2024-11-27 05:36:41.652324] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:53.902 [2024-11-27 05:36:41.652328] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:53.903 [2024-11-27 05:36:41.652332] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:53.903 [2024-11-27 05:36:41.652336] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:53.903 [2024-11-27 05:36:41.652340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.903 [2024-11-27 
05:36:41.652383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.903 [2024-11-27 05:36:41.652390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.903 [2024-11-27 05:36:41.652397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:53.903 [2024-11-27 05:36:41.652401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652429] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:53.903 [2024-11-27 05:36:41.652434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652528] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:53.903 [2024-11-27 05:36:41.652532] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:53.903 [2024-11-27 05:36:41.652535] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.903 [2024-11-27 05:36:41.652540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652563] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:53.903 [2024-11-27 05:36:41.652573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652585] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.903 [2024-11-27 05:36:41.652589] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.903 [2024-11-27 05:36:41.652592] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.903 [2024-11-27 05:36:41.652597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652644] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:53.903 [2024-11-27 05:36:41.652647] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.903 [2024-11-27 05:36:41.652650] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.903 [2024-11-27 05:36:41.652655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652710] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:53.903 [2024-11-27 05:36:41.652714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:53.903 [2024-11-27 05:36:41.652719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:53.903 [2024-11-27 05:36:41.652735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652775] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652795] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652817] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:53.903 [2024-11-27 05:36:41.652822] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:53.903 [2024-11-27 05:36:41.652825] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:53.903 [2024-11-27 05:36:41.652827] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:53.903 [2024-11-27 05:36:41.652830] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:53.903 [2024-11-27 05:36:41.652836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:53.903 [2024-11-27 05:36:41.652842] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:53.903 [2024-11-27 05:36:41.652846] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:53.903 [2024-11-27 05:36:41.652849] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.903 [2024-11-27 05:36:41.652854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652860] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:53.903 [2024-11-27 05:36:41.652863] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:53.903 [2024-11-27 05:36:41.652866] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.903 [2024-11-27 05:36:41.652871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652878] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:53.903 [2024-11-27 05:36:41.652882] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:53.903 [2024-11-27 05:36:41.652884] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:53.903 [2024-11-27 05:36:41.652892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:53.903 [2024-11-27 05:36:41.652898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:53.903 [2024-11-27 05:36:41.652923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:53.903 ===================================================== 00:14:53.904 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:53.904 ===================================================== 00:14:53.904 Controller Capabilities/Features 00:14:53.904 ================================ 00:14:53.904 Vendor ID: 4e58 00:14:53.904 Subsystem Vendor ID: 4e58 00:14:53.904 Serial Number: SPDK1 00:14:53.904 Model Number: SPDK bdev Controller 00:14:53.904 Firmware Version: 25.01 00:14:53.904 Recommended Arb Burst: 6 00:14:53.904 IEEE OUI Identifier: 8d 6b 50 00:14:53.904 Multi-path I/O 00:14:53.904 May have multiple subsystem ports: Yes 00:14:53.904 May have multiple controllers: Yes 00:14:53.904 Associated with SR-IOV VF: No 00:14:53.904 Max Data Transfer Size: 131072 00:14:53.904 Max Number of Namespaces: 32 00:14:53.904 Max Number of I/O Queues: 127 00:14:53.904 NVMe Specification Version (VS): 1.3 00:14:53.904 NVMe Specification Version (Identify): 1.3 00:14:53.904 Maximum Queue Entries: 256 00:14:53.904 Contiguous Queues Required: Yes 00:14:53.904 Arbitration Mechanisms Supported 00:14:53.904 Weighted Round Robin: Not Supported 00:14:53.904 Vendor Specific: Not Supported 00:14:53.904 Reset Timeout: 15000 ms 00:14:53.904 Doorbell Stride: 4 bytes 00:14:53.904 NVM Subsystem Reset: Not Supported 00:14:53.904 Command Sets Supported 00:14:53.904 NVM Command Set: Supported 00:14:53.904 Boot Partition: Not Supported 00:14:53.904 Memory 
Page Size Minimum: 4096 bytes 00:14:53.904 Memory Page Size Maximum: 4096 bytes 00:14:53.904 Persistent Memory Region: Not Supported 00:14:53.904 Optional Asynchronous Events Supported 00:14:53.904 Namespace Attribute Notices: Supported 00:14:53.904 Firmware Activation Notices: Not Supported 00:14:53.904 ANA Change Notices: Not Supported 00:14:53.904 PLE Aggregate Log Change Notices: Not Supported 00:14:53.904 LBA Status Info Alert Notices: Not Supported 00:14:53.904 EGE Aggregate Log Change Notices: Not Supported 00:14:53.904 Normal NVM Subsystem Shutdown event: Not Supported 00:14:53.904 Zone Descriptor Change Notices: Not Supported 00:14:53.904 Discovery Log Change Notices: Not Supported 00:14:53.904 Controller Attributes 00:14:53.904 128-bit Host Identifier: Supported 00:14:53.904 Non-Operational Permissive Mode: Not Supported 00:14:53.904 NVM Sets: Not Supported 00:14:53.904 Read Recovery Levels: Not Supported 00:14:53.904 Endurance Groups: Not Supported 00:14:53.904 Predictable Latency Mode: Not Supported 00:14:53.904 Traffic Based Keep ALive: Not Supported 00:14:53.904 Namespace Granularity: Not Supported 00:14:53.904 SQ Associations: Not Supported 00:14:53.904 UUID List: Not Supported 00:14:53.904 Multi-Domain Subsystem: Not Supported 00:14:53.904 Fixed Capacity Management: Not Supported 00:14:53.904 Variable Capacity Management: Not Supported 00:14:53.904 Delete Endurance Group: Not Supported 00:14:53.904 Delete NVM Set: Not Supported 00:14:53.904 Extended LBA Formats Supported: Not Supported 00:14:53.904 Flexible Data Placement Supported: Not Supported 00:14:53.904 00:14:53.904 Controller Memory Buffer Support 00:14:53.904 ================================ 00:14:53.904 Supported: No 00:14:53.904 00:14:53.904 Persistent Memory Region Support 00:14:53.904 ================================ 00:14:53.904 Supported: No 00:14:53.904 00:14:53.904 Admin Command Set Attributes 00:14:53.904 ============================ 00:14:53.904 Security Send/Receive: Not Supported 
00:14:53.904 Format NVM: Not Supported 00:14:53.904 Firmware Activate/Download: Not Supported 00:14:53.904 Namespace Management: Not Supported 00:14:53.904 Device Self-Test: Not Supported 00:14:53.904 Directives: Not Supported 00:14:53.904 NVMe-MI: Not Supported 00:14:53.904 Virtualization Management: Not Supported 00:14:53.904 Doorbell Buffer Config: Not Supported 00:14:53.904 Get LBA Status Capability: Not Supported 00:14:53.904 Command & Feature Lockdown Capability: Not Supported 00:14:53.904 Abort Command Limit: 4 00:14:53.904 Async Event Request Limit: 4 00:14:53.904 Number of Firmware Slots: N/A 00:14:53.904 Firmware Slot 1 Read-Only: N/A 00:14:53.904 Firmware Activation Without Reset: N/A 00:14:53.904 Multiple Update Detection Support: N/A 00:14:53.904 Firmware Update Granularity: No Information Provided 00:14:53.904 Per-Namespace SMART Log: No 00:14:53.904 Asymmetric Namespace Access Log Page: Not Supported 00:14:53.904 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:53.904 Command Effects Log Page: Supported 00:14:53.904 Get Log Page Extended Data: Supported 00:14:53.904 Telemetry Log Pages: Not Supported 00:14:53.904 Persistent Event Log Pages: Not Supported 00:14:53.904 Supported Log Pages Log Page: May Support 00:14:53.904 Commands Supported & Effects Log Page: Not Supported 00:14:53.904 Feature Identifiers & Effects Log Page:May Support 00:14:53.904 NVMe-MI Commands & Effects Log Page: May Support 00:14:53.904 Data Area 4 for Telemetry Log: Not Supported 00:14:53.904 Error Log Page Entries Supported: 128 00:14:53.904 Keep Alive: Supported 00:14:53.904 Keep Alive Granularity: 10000 ms 00:14:53.904 00:14:53.904 NVM Command Set Attributes 00:14:53.904 ========================== 00:14:53.904 Submission Queue Entry Size 00:14:53.904 Max: 64 00:14:53.904 Min: 64 00:14:53.904 Completion Queue Entry Size 00:14:53.904 Max: 16 00:14:53.904 Min: 16 00:14:53.904 Number of Namespaces: 32 00:14:53.904 Compare Command: Supported 00:14:53.904 Write Uncorrectable 
Command: Not Supported 00:14:53.904 Dataset Management Command: Supported 00:14:53.904 Write Zeroes Command: Supported 00:14:53.904 Set Features Save Field: Not Supported 00:14:53.904 Reservations: Not Supported 00:14:53.904 Timestamp: Not Supported 00:14:53.904 Copy: Supported 00:14:53.904 Volatile Write Cache: Present 00:14:53.904 Atomic Write Unit (Normal): 1 00:14:53.904 Atomic Write Unit (PFail): 1 00:14:53.904 Atomic Compare & Write Unit: 1 00:14:53.904 Fused Compare & Write: Supported 00:14:53.904 Scatter-Gather List 00:14:53.904 SGL Command Set: Supported (Dword aligned) 00:14:53.904 SGL Keyed: Not Supported 00:14:53.904 SGL Bit Bucket Descriptor: Not Supported 00:14:53.904 SGL Metadata Pointer: Not Supported 00:14:53.904 Oversized SGL: Not Supported 00:14:53.904 SGL Metadata Address: Not Supported 00:14:53.904 SGL Offset: Not Supported 00:14:53.904 Transport SGL Data Block: Not Supported 00:14:53.904 Replay Protected Memory Block: Not Supported 00:14:53.904 00:14:53.904 Firmware Slot Information 00:14:53.904 ========================= 00:14:53.904 Active slot: 1 00:14:53.904 Slot 1 Firmware Revision: 25.01 00:14:53.904 00:14:53.904 00:14:53.904 Commands Supported and Effects 00:14:53.904 ============================== 00:14:53.904 Admin Commands 00:14:53.904 -------------- 00:14:53.904 Get Log Page (02h): Supported 00:14:53.904 Identify (06h): Supported 00:14:53.904 Abort (08h): Supported 00:14:53.904 Set Features (09h): Supported 00:14:53.904 Get Features (0Ah): Supported 00:14:53.904 Asynchronous Event Request (0Ch): Supported 00:14:53.904 Keep Alive (18h): Supported 00:14:53.904 I/O Commands 00:14:53.904 ------------ 00:14:53.904 Flush (00h): Supported LBA-Change 00:14:53.904 Write (01h): Supported LBA-Change 00:14:53.904 Read (02h): Supported 00:14:53.904 Compare (05h): Supported 00:14:53.904 Write Zeroes (08h): Supported LBA-Change 00:14:53.904 Dataset Management (09h): Supported LBA-Change 00:14:53.904 Copy (19h): Supported LBA-Change 00:14:53.904 
00:14:53.904 Error Log 00:14:53.904 ========= 00:14:53.904 00:14:53.904 Arbitration 00:14:53.904 =========== 00:14:53.904 Arbitration Burst: 1 00:14:53.904 00:14:53.904 Power Management 00:14:53.904 ================ 00:14:53.904 Number of Power States: 1 00:14:53.904 Current Power State: Power State #0 00:14:53.904 Power State #0: 00:14:53.904 Max Power: 0.00 W 00:14:53.904 Non-Operational State: Operational 00:14:53.904 Entry Latency: Not Reported 00:14:53.904 Exit Latency: Not Reported 00:14:53.904 Relative Read Throughput: 0 00:14:53.904 Relative Read Latency: 0 00:14:53.904 Relative Write Throughput: 0 00:14:53.904 Relative Write Latency: 0 00:14:53.904 Idle Power: Not Reported 00:14:53.905 Active Power: Not Reported 00:14:53.905 Non-Operational Permissive Mode: Not Supported 00:14:53.905 00:14:53.905 Health Information 00:14:53.905 ================== 00:14:53.905 Critical Warnings: 00:14:53.905 Available Spare Space: OK 00:14:53.905 Temperature: OK 00:14:53.905 Device Reliability: OK 00:14:53.905 Read Only: No 00:14:53.905 Volatile Memory Backup: OK 00:14:53.905 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:53.905 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:53.905 Available Spare: 0% 00:14:53.905 Available Sp[2024-11-27 05:36:41.653008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:53.905 [2024-11-27 05:36:41.653017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:53.905 [2024-11-27 05:36:41.653042] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:53.905 [2024-11-27 05:36:41.653050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.905 [2024-11-27 05:36:41.653056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.905 [2024-11-27 05:36:41.653061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.905 [2024-11-27 05:36:41.653066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:53.905 [2024-11-27 05:36:41.653202] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:53.905 [2024-11-27 05:36:41.653212] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:53.905 [2024-11-27 05:36:41.654208] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.905 [2024-11-27 05:36:41.654254] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:53.905 [2024-11-27 05:36:41.654260] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:53.905 [2024-11-27 05:36:41.655219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:53.905 [2024-11-27 05:36:41.655229] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:53.905 [2024-11-27 05:36:41.655277] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:53.905 [2024-11-27 05:36:41.657676] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:53.905 are Threshold: 0% 00:14:53.905 Life Percentage Used: 0% 
00:14:53.905 Data Units Read: 0 00:14:53.905 Data Units Written: 0 00:14:53.905 Host Read Commands: 0 00:14:53.905 Host Write Commands: 0 00:14:53.905 Controller Busy Time: 0 minutes 00:14:53.905 Power Cycles: 0 00:14:53.905 Power On Hours: 0 hours 00:14:53.905 Unsafe Shutdowns: 0 00:14:53.905 Unrecoverable Media Errors: 0 00:14:53.905 Lifetime Error Log Entries: 0 00:14:53.905 Warning Temperature Time: 0 minutes 00:14:53.905 Critical Temperature Time: 0 minutes 00:14:53.905 00:14:53.905 Number of Queues 00:14:53.905 ================ 00:14:53.905 Number of I/O Submission Queues: 127 00:14:53.905 Number of I/O Completion Queues: 127 00:14:53.905 00:14:53.905 Active Namespaces 00:14:53.905 ================= 00:14:53.905 Namespace ID:1 00:14:53.905 Error Recovery Timeout: Unlimited 00:14:53.905 Command Set Identifier: NVM (00h) 00:14:53.905 Deallocate: Supported 00:14:53.905 Deallocated/Unwritten Error: Not Supported 00:14:53.905 Deallocated Read Value: Unknown 00:14:53.905 Deallocate in Write Zeroes: Not Supported 00:14:53.905 Deallocated Guard Field: 0xFFFF 00:14:53.905 Flush: Supported 00:14:53.905 Reservation: Supported 00:14:53.905 Namespace Sharing Capabilities: Multiple Controllers 00:14:53.905 Size (in LBAs): 131072 (0GiB) 00:14:53.905 Capacity (in LBAs): 131072 (0GiB) 00:14:53.905 Utilization (in LBAs): 131072 (0GiB) 00:14:53.905 NGUID: 391AC271CA574ABA843C16BEAABBA2DD 00:14:53.905 UUID: 391ac271-ca57-4aba-843c-16beaabba2dd 00:14:53.905 Thin Provisioning: Not Supported 00:14:53.905 Per-NS Atomic Units: Yes 00:14:53.905 Atomic Boundary Size (Normal): 0 00:14:53.905 Atomic Boundary Size (PFail): 0 00:14:53.905 Atomic Boundary Offset: 0 00:14:53.905 Maximum Single Source Range Length: 65535 00:14:53.905 Maximum Copy Length: 65535 00:14:53.905 Maximum Source Range Count: 1 00:14:53.905 NGUID/EUI64 Never Reused: No 00:14:53.905 Namespace Write Protected: No 00:14:53.905 Number of LBA Formats: 1 00:14:53.905 Current LBA Format: LBA Format #00 00:14:53.905 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:53.905 00:14:53.905 05:36:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:53.905 [2024-11-27 05:36:41.895251] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.175 Initializing NVMe Controllers 00:14:59.175 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:59.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:59.175 Initialization complete. Launching workers. 00:14:59.175 ======================================================== 00:14:59.175 Latency(us) 00:14:59.175 Device Information : IOPS MiB/s Average min max 00:14:59.175 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39968.80 156.13 3202.55 958.91 7605.99 00:14:59.175 ======================================================== 00:14:59.175 Total : 39968.80 156.13 3202.55 958.91 7605.99 00:14:59.175 00:14:59.175 [2024-11-27 05:36:46.916331] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.175 05:36:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:59.175 [2024-11-27 05:36:47.153380] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:04.447 Initializing NVMe Controllers 00:15:04.447 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:04.448 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:04.448 Initialization complete. Launching workers. 00:15:04.448 ======================================================== 00:15:04.448 Latency(us) 00:15:04.448 Device Information : IOPS MiB/s Average min max 00:15:04.448 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.32 62.72 7976.82 6967.21 8991.25 00:15:04.448 ======================================================== 00:15:04.448 Total : 16057.32 62.72 7976.82 6967.21 8991.25 00:15:04.448 00:15:04.448 [2024-11-27 05:36:52.194844] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:04.448 05:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:04.448 [2024-11-27 05:36:52.393808] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.721 [2024-11-27 05:36:57.471010] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.721 Initializing NVMe Controllers 00:15:09.721 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.721 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:09.721 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:09.721 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:09.721 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:09.721 Initialization complete. 
Launching workers. 00:15:09.721 Starting thread on core 2 00:15:09.721 Starting thread on core 3 00:15:09.721 Starting thread on core 1 00:15:09.721 05:36:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:09.980 [2024-11-27 05:36:57.764076] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.275 [2024-11-27 05:37:00.827718] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.275 Initializing NVMe Controllers 00:15:13.275 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.275 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.275 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:13.275 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:13.275 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:13.275 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:13.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:13.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:13.275 Initialization complete. Launching workers. 
00:15:13.275 Starting thread on core 1 with urgent priority queue 00:15:13.275 Starting thread on core 2 with urgent priority queue 00:15:13.275 Starting thread on core 3 with urgent priority queue 00:15:13.275 Starting thread on core 0 with urgent priority queue 00:15:13.275 SPDK bdev Controller (SPDK1 ) core 0: 8135.67 IO/s 12.29 secs/100000 ios 00:15:13.275 SPDK bdev Controller (SPDK1 ) core 1: 7616.33 IO/s 13.13 secs/100000 ios 00:15:13.275 SPDK bdev Controller (SPDK1 ) core 2: 8037.33 IO/s 12.44 secs/100000 ios 00:15:13.275 SPDK bdev Controller (SPDK1 ) core 3: 8827.33 IO/s 11.33 secs/100000 ios 00:15:13.275 ======================================================== 00:15:13.275 00:15:13.275 05:37:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:13.275 [2024-11-27 05:37:01.106270] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.275 Initializing NVMe Controllers 00:15:13.275 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.275 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.275 Namespace ID: 1 size: 0GB 00:15:13.275 Initialization complete. 00:15:13.275 INFO: using host memory buffer for IO 00:15:13.275 Hello world! 
00:15:13.275 [2024-11-27 05:37:01.140477] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.275 05:37:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:13.535 [2024-11-27 05:37:01.417056] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:14.534 Initializing NVMe Controllers 00:15:14.534 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:14.534 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:14.534 Initialization complete. Launching workers. 00:15:14.534 submit (in ns) avg, min, max = 8090.8, 3186.7, 4001474.3 00:15:14.534 complete (in ns) avg, min, max = 19373.5, 1757.1, 4000063.8 00:15:14.534 00:15:14.534 Submit histogram 00:15:14.534 ================ 00:15:14.534 Range in us Cumulative Count 00:15:14.534 3.185 - 3.200: 0.1422% ( 23) 00:15:14.534 3.200 - 3.215: 0.8654% ( 117) 00:15:14.534 3.215 - 3.230: 2.8003% ( 313) 00:15:14.534 3.230 - 3.246: 4.8031% ( 324) 00:15:14.534 3.246 - 3.261: 8.1226% ( 537) 00:15:14.534 3.261 - 3.276: 13.6490% ( 894) 00:15:14.534 3.276 - 3.291: 19.6019% ( 963) 00:15:14.534 3.291 - 3.307: 25.9999% ( 1035) 00:15:14.534 3.307 - 3.322: 32.4597% ( 1045) 00:15:14.534 3.322 - 3.337: 38.6413% ( 1000) 00:15:14.534 3.337 - 3.352: 44.0811% ( 880) 00:15:14.534 3.352 - 3.368: 50.2813% ( 1003) 00:15:14.534 3.368 - 3.383: 56.3887% ( 988) 00:15:14.534 3.383 - 3.398: 61.3896% ( 809) 00:15:14.534 3.398 - 3.413: 67.0025% ( 908) 00:15:14.534 3.413 - 3.429: 73.5612% ( 1061) 00:15:14.534 3.429 - 3.444: 77.4247% ( 625) 00:15:14.534 3.444 - 3.459: 81.2264% ( 615) 00:15:14.534 3.459 - 3.474: 84.2369% ( 487) 00:15:14.534 3.474 - 3.490: 86.0914% ( 300) 00:15:14.534 3.490 - 3.505: 87.1113% ( 
165) 00:15:14.534 3.505 - 3.520: 87.7048% ( 96) 00:15:14.534 3.520 - 3.535: 88.2240% ( 84) 00:15:14.534 3.535 - 3.550: 88.8607% ( 103) 00:15:14.534 3.550 - 3.566: 89.6211% ( 123) 00:15:14.534 3.566 - 3.581: 90.4618% ( 136) 00:15:14.534 3.581 - 3.596: 91.3396% ( 142) 00:15:14.534 3.596 - 3.611: 92.2853% ( 153) 00:15:14.534 3.611 - 3.627: 93.2126% ( 150) 00:15:14.534 3.627 - 3.642: 94.2882% ( 174) 00:15:14.534 3.642 - 3.657: 95.1845% ( 145) 00:15:14.534 3.657 - 3.672: 96.0623% ( 142) 00:15:14.534 3.672 - 3.688: 96.8721% ( 131) 00:15:14.534 3.688 - 3.703: 97.5521% ( 110) 00:15:14.534 3.703 - 3.718: 98.1146% ( 91) 00:15:14.534 3.718 - 3.733: 98.5350% ( 68) 00:15:14.534 3.733 - 3.749: 98.8626% ( 53) 00:15:14.534 3.749 - 3.764: 99.0418% ( 29) 00:15:14.534 3.764 - 3.779: 99.2644% ( 36) 00:15:14.534 3.779 - 3.794: 99.3695% ( 17) 00:15:14.534 3.794 - 3.810: 99.4560% ( 14) 00:15:14.534 3.810 - 3.825: 99.5240% ( 11) 00:15:14.534 3.825 - 3.840: 99.5735% ( 8) 00:15:14.534 3.840 - 3.855: 99.6044% ( 5) 00:15:14.534 3.855 - 3.870: 99.6106% ( 1) 00:15:14.534 3.870 - 3.886: 99.6167% ( 1) 00:15:14.535 3.886 - 3.901: 99.6229% ( 1) 00:15:14.535 4.023 - 4.053: 99.6291% ( 1) 00:15:14.535 4.084 - 4.114: 99.6353% ( 1) 00:15:14.535 4.114 - 4.145: 99.6415% ( 1) 00:15:14.535 4.175 - 4.206: 99.6476% ( 1) 00:15:14.535 5.181 - 5.211: 99.6538% ( 1) 00:15:14.535 5.211 - 5.242: 99.6662% ( 2) 00:15:14.535 5.242 - 5.272: 99.6724% ( 1) 00:15:14.535 5.303 - 5.333: 99.6786% ( 1) 00:15:14.535 5.333 - 5.364: 99.6847% ( 1) 00:15:14.535 5.364 - 5.394: 99.6909% ( 1) 00:15:14.535 5.425 - 5.455: 99.6971% ( 1) 00:15:14.535 5.547 - 5.577: 99.7033% ( 1) 00:15:14.535 5.669 - 5.699: 99.7095% ( 1) 00:15:14.535 5.851 - 5.882: 99.7156% ( 1) 00:15:14.535 5.943 - 5.973: 99.7218% ( 1) 00:15:14.535 6.004 - 6.034: 99.7280% ( 1) 00:15:14.535 6.156 - 6.187: 99.7342% ( 1) 00:15:14.535 6.278 - 6.309: 99.7404% ( 1) 00:15:14.535 6.430 - 6.461: 99.7527% ( 2) 00:15:14.535 6.613 - 6.644: 99.7589% ( 1) 00:15:14.535 6.796 - 6.827: 
99.7651% ( 1) 00:15:14.535 6.827 - 6.857: 99.7713% ( 1) 00:15:14.535 6.888 - 6.918: 99.7775% ( 1) 00:15:14.535 7.010 - 7.040: 99.7836% ( 1) 00:15:14.535 7.101 - 7.131: 99.7960% ( 2) 00:15:14.535 7.375 - 7.406: 99.8022% ( 1) 00:15:14.535 7.863 - 7.924: 99.8084% ( 1) 00:15:14.535 7.924 - 7.985: 99.8207% ( 2) 00:15:14.535 8.046 - 8.107: 99.8269% ( 1) 00:15:14.535 8.107 - 8.168: 99.8331% ( 1) 00:15:14.535 8.290 - 8.350: 99.8393% ( 1) 00:15:14.535 8.594 - 8.655: 99.8455% ( 1) 00:15:14.535 [2024-11-27 05:37:02.437008] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:14.535 8.960 - 9.021: 99.8516% ( 1) 00:15:14.535 9.082 - 9.143: 99.8578% ( 1) 00:15:14.535 9.570 - 9.630: 99.8640% ( 1) 00:15:14.535 10.057 - 10.118: 99.8702% ( 1) 00:15:14.535 10.240 - 10.301: 99.8764% ( 1) 00:15:14.535 59.246 - 59.490: 99.8825% ( 1) 00:15:14.535 3994.575 - 4025.783: 100.0000% ( 19) 00:15:14.535 00:15:14.535 Complete histogram 00:15:14.535 ================== 00:15:14.535 Range in us Cumulative Count 00:15:14.535 1.752 - 1.760: 0.0062% ( 1) 00:15:14.535 1.760 - 1.768: 0.7047% ( 113) 00:15:14.535 1.768 - 1.775: 5.5140% ( 778) 00:15:14.535 1.775 - 1.783: 14.8915% ( 1517) 00:15:14.535 1.783 - 1.790: 21.0484% ( 996) 00:15:14.535 1.790 - 1.798: 23.2120% ( 350) 00:15:14.535 1.798 - 1.806: 24.3617% ( 186) 00:15:14.535 1.806 - 1.813: 25.3879% ( 166) 00:15:14.535 1.813 - 1.821: 27.7307% ( 379) 00:15:14.535 1.821 - 1.829: 40.6132% ( 2084) 00:15:14.535 1.829 - 1.836: 65.0862% ( 3959) 00:15:14.535 1.836 - 1.844: 83.6249% ( 2999) 00:15:14.535 1.844 - 1.851: 91.0243% ( 1197) 00:15:14.535 1.851 - 1.859: 93.8926% ( 464) 00:15:14.535 1.859 - 1.867: 95.7223% ( 296) 00:15:14.535 1.867 - 1.874: 96.4518% ( 118) 00:15:14.535 1.874 - 1.882: 96.8041% ( 57) 00:15:14.535 1.882 - 1.890: 97.0390% ( 38) 00:15:14.535 1.890 - 1.897: 97.4532% ( 67) 00:15:14.535 1.897 - 1.905: 97.9662% ( 83) 00:15:14.535 1.905 - 1.912: 98.5164% ( 89) 00:15:14.535 1.912 - 1.920: 
98.8688% ( 57) 00:15:14.535 1.920 - 1.928: 99.0604% ( 31) 00:15:14.535 1.928 - 1.935: 99.1469% ( 14) 00:15:14.535 1.935 - 1.943: 99.1717% ( 4) 00:15:14.535 1.943 - 1.950: 99.1840% ( 2) 00:15:14.535 1.950 - 1.966: 99.1964% ( 2) 00:15:14.535 1.966 - 1.981: 99.2026% ( 1) 00:15:14.535 2.027 - 2.042: 99.2088% ( 1) 00:15:14.535 2.042 - 2.057: 99.2149% ( 1) 00:15:14.535 2.057 - 2.072: 99.2582% ( 7) 00:15:14.535 2.072 - 2.088: 99.3138% ( 9) 00:15:14.535 2.088 - 2.103: 99.3447% ( 5) 00:15:14.535 2.133 - 2.149: 99.3509% ( 1) 00:15:14.535 3.566 - 3.581: 99.3571% ( 1) 00:15:14.535 3.688 - 3.703: 99.3695% ( 2) 00:15:14.535 3.718 - 3.733: 99.3757% ( 1) 00:15:14.535 3.764 - 3.779: 99.3818% ( 1) 00:15:14.535 3.779 - 3.794: 99.3942% ( 2) 00:15:14.535 3.840 - 3.855: 99.4004% ( 1) 00:15:14.535 3.886 - 3.901: 99.4066% ( 1) 00:15:14.535 3.931 - 3.962: 99.4127% ( 1) 00:15:14.535 3.962 - 3.992: 99.4251% ( 2) 00:15:14.535 4.023 - 4.053: 99.4313% ( 1) 00:15:14.535 4.297 - 4.328: 99.4375% ( 1) 00:15:14.535 4.510 - 4.541: 99.4437% ( 1) 00:15:14.535 4.754 - 4.785: 99.4498% ( 1) 00:15:14.535 5.120 - 5.150: 99.4560% ( 1) 00:15:14.535 5.150 - 5.181: 99.4622% ( 1) 00:15:14.535 5.242 - 5.272: 99.4684% ( 1) 00:15:14.535 5.669 - 5.699: 99.4746% ( 1) 00:15:14.535 5.912 - 5.943: 99.4807% ( 1) 00:15:14.535 6.065 - 6.095: 99.4869% ( 1) 00:15:14.535 6.126 - 6.156: 99.4931% ( 1) 00:15:14.535 6.430 - 6.461: 99.4993% ( 1) 00:15:14.535 6.461 - 6.491: 99.5055% ( 1) 00:15:14.535 6.552 - 6.583: 99.5117% ( 1) 00:15:14.535 7.619 - 7.650: 99.5178% ( 1) 00:15:14.535 7.680 - 7.710: 99.5240% ( 1) 00:15:14.535 7.710 - 7.741: 99.5302% ( 1) 00:15:14.535 8.533 - 8.594: 99.5364% ( 1) 00:15:14.535 9.204 - 9.265: 99.5426% ( 1) 00:15:14.535 13.349 - 13.410: 99.5487% ( 1) 00:15:14.535 14.141 - 14.202: 99.5549% ( 1) 00:15:14.535 141.410 - 142.385: 99.5611% ( 1) 00:15:14.535 3994.575 - 4025.783: 100.0000% ( 71) 00:15:14.535 00:15:14.535 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # 
aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:15:14.535 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:15:14.535 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:15:14.535 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:15:14.535 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:14.813 [
00:15:14.813   {
00:15:14.813     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:14.813     "subtype": "Discovery",
00:15:14.813     "listen_addresses": [],
00:15:14.813     "allow_any_host": true,
00:15:14.813     "hosts": []
00:15:14.813   },
00:15:14.813   {
00:15:14.813     "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:14.813     "subtype": "NVMe",
00:15:14.813     "listen_addresses": [
00:15:14.813       {
00:15:14.813         "trtype": "VFIOUSER",
00:15:14.813         "adrfam": "IPv4",
00:15:14.813         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:15:14.813         "trsvcid": "0"
00:15:14.813       }
00:15:14.813     ],
00:15:14.813     "allow_any_host": true,
00:15:14.813     "hosts": [],
00:15:14.813     "serial_number": "SPDK1",
00:15:14.813     "model_number": "SPDK bdev Controller",
00:15:14.813     "max_namespaces": 32,
00:15:14.813     "min_cntlid": 1,
00:15:14.813     "max_cntlid": 65519,
00:15:14.813     "namespaces": [
00:15:14.813       {
00:15:14.813         "nsid": 1,
00:15:14.813         "bdev_name": "Malloc1",
00:15:14.813         "name": "Malloc1",
00:15:14.813         "nguid": "391AC271CA574ABA843C16BEAABBA2DD",
00:15:14.813         "uuid": "391ac271-ca57-4aba-843c-16beaabba2dd"
00:15:14.813       }
00:15:14.813     ]
00:15:14.813   },
00:15:14.813   {
00:15:14.813     "nqn": "nqn.2019-07.io.spdk:cnode2",
00:15:14.813     "subtype": "NVMe",
00:15:14.813     "listen_addresses": [
00:15:14.813       {
00:15:14.813         "trtype": "VFIOUSER",
00:15:14.813         "adrfam": "IPv4",
00:15:14.813         "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:15:14.813         "trsvcid": "0"
00:15:14.813       }
00:15:14.813     ],
00:15:14.813     "allow_any_host": true,
00:15:14.813     "hosts": [],
00:15:14.813     "serial_number": "SPDK2",
00:15:14.813     "model_number": "SPDK bdev Controller",
00:15:14.813     "max_namespaces": 32,
00:15:14.813     "min_cntlid": 1,
00:15:14.813     "max_cntlid": 65519,
00:15:14.813     "namespaces": [
00:15:14.813       {
00:15:14.813         "nsid": 1,
00:15:14.813         "bdev_name": "Malloc2",
00:15:14.813         "name": "Malloc2",
00:15:14.813         "nguid": "F2894408A8D145DA9CE961AB30692E4D",
00:15:14.813         "uuid": "f2894408-a8d1-45da-9ce9-61ab30692e4d"
00:15:14.813       }
00:15:14.813     ]
00:15:14.813   }
00:15:14.813 ]
00:15:14.813 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:15:14.813 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1729440
00:15:14.813 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:15:14.813 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:15:14.813 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:15:14.813 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:15:14.813 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:15:14.813 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:15:14.813 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:15:14.813 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:15:15.103 [2024-11-27 05:37:02.845075] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:15.103 Malloc3
00:15:15.103 05:37:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:15:15.103 [2024-11-27 05:37:03.095937] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:15.396 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:15.396 Asynchronous Event Request test
00:15:15.396 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:15.396 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:15.396 Registering asynchronous event callbacks...
00:15:15.396 Starting namespace attribute notice tests for all controllers...
00:15:15.396 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:15:15.396 aer_cb - Changed Namespace
00:15:15.396 Cleaning up...
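The AER test above adds Malloc3 as nsid 2 to nqn.2019-07.io.spdk:cnode1 and then re-queries `nvmf_get_subsystems`; the check this implies can be sketched as a small helper (hypothetical, not part of the SPDK test scripts) that inspects the RPC's JSON output, trimmed here to the fields shown in this log:

```python
import json

# Hypothetical helper (not part of SPDK): return True when the subsystem
# identified by `subnqn` in `nvmf_get_subsystems` output contains a
# namespace backed by bdev `bdev_name`.
def has_namespace(subsystems_json: str, subnqn: str, bdev_name: str) -> bool:
    for subsys in json.loads(subsystems_json):
        if subsys.get("nqn") != subnqn:
            continue
        return any(ns.get("bdev_name") == bdev_name
                   for ns in subsys.get("namespaces", []))
    return False

# Trimmed-down sample mirroring the RPC output captured in this log.
sample = json.dumps([
    {"nqn": "nqn.2019-07.io.spdk:cnode1",
     "namespaces": [{"nsid": 1, "bdev_name": "Malloc1"},
                    {"nsid": 2, "bdev_name": "Malloc3"}]},
])
print(has_namespace(sample, "nqn.2019-07.io.spdk:cnode1", "Malloc3"))  # True
```

In the real test the JSON comes from `scripts/rpc.py nvmf_get_subsystems`, as invoked at `nvmf_vfio_user.sh@42` below.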
00:15:15.396 [
00:15:15.396   {
00:15:15.396     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:15.396     "subtype": "Discovery",
00:15:15.396     "listen_addresses": [],
00:15:15.396     "allow_any_host": true,
00:15:15.396     "hosts": []
00:15:15.396   },
00:15:15.396   {
00:15:15.396     "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:15.396     "subtype": "NVMe",
00:15:15.396     "listen_addresses": [
00:15:15.396       {
00:15:15.396         "trtype": "VFIOUSER",
00:15:15.396         "adrfam": "IPv4",
00:15:15.396         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:15:15.396         "trsvcid": "0"
00:15:15.396       }
00:15:15.396     ],
00:15:15.396     "allow_any_host": true,
00:15:15.396     "hosts": [],
00:15:15.396     "serial_number": "SPDK1",
00:15:15.396     "model_number": "SPDK bdev Controller",
00:15:15.396     "max_namespaces": 32,
00:15:15.396     "min_cntlid": 1,
00:15:15.396     "max_cntlid": 65519,
00:15:15.396     "namespaces": [
00:15:15.396       {
00:15:15.396         "nsid": 1,
00:15:15.396         "bdev_name": "Malloc1",
00:15:15.396         "name": "Malloc1",
00:15:15.396         "nguid": "391AC271CA574ABA843C16BEAABBA2DD",
00:15:15.396         "uuid": "391ac271-ca57-4aba-843c-16beaabba2dd"
00:15:15.396       },
00:15:15.396       {
00:15:15.396         "nsid": 2,
00:15:15.396         "bdev_name": "Malloc3",
00:15:15.396         "name": "Malloc3",
00:15:15.396         "nguid": "2BC8DD1E8BD2496FA2C70FC5FB22DB12",
00:15:15.396         "uuid": "2bc8dd1e-8bd2-496f-a2c7-0fc5fb22db12"
00:15:15.396       }
00:15:15.396     ]
00:15:15.396   },
00:15:15.396   {
00:15:15.396     "nqn": "nqn.2019-07.io.spdk:cnode2",
00:15:15.396     "subtype": "NVMe",
00:15:15.396     "listen_addresses": [
00:15:15.396       {
00:15:15.396         "trtype": "VFIOUSER",
00:15:15.396         "adrfam": "IPv4",
00:15:15.396         "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:15:15.396         "trsvcid": "0"
00:15:15.396       }
00:15:15.396     ],
00:15:15.396     "allow_any_host": true,
00:15:15.396     "hosts": [],
00:15:15.396     "serial_number": "SPDK2",
00:15:15.396     "model_number": "SPDK bdev Controller",
00:15:15.396     "max_namespaces": 32,
00:15:15.396     "min_cntlid": 1,
00:15:15.396     "max_cntlid": 65519,
00:15:15.396     "namespaces": [
00:15:15.396       {
00:15:15.396         "nsid": 1,
00:15:15.396         "bdev_name": "Malloc2",
00:15:15.396         "name": "Malloc2",
00:15:15.396         "nguid": "F2894408A8D145DA9CE961AB30692E4D",
00:15:15.396         "uuid": "f2894408-a8d1-45da-9ce9-61ab30692e4d"
00:15:15.396       }
00:15:15.396     ]
00:15:15.396   }
00:15:15.396 ]
00:15:15.396 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1729440
00:15:15.396 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:15:15.396 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:15:15.396 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:15:15.396 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:15:15.396 [2024-11-27 05:37:03.331776] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:15:15.396 [2024-11-27 05:37:03.331821] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729668 ] 00:15:15.396 [2024-11-27 05:37:03.371079] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:15.396 [2024-11-27 05:37:03.376324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:15.396 [2024-11-27 05:37:03.376349] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fde4c14a000 00:15:15.396 [2024-11-27 05:37:03.377323] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.396 [2024-11-27 05:37:03.378326] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.396 [2024-11-27 05:37:03.379340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.396 [2024-11-27 05:37:03.380351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.396 [2024-11-27 05:37:03.381358] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.396 [2024-11-27 05:37:03.382362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.396 [2024-11-27 05:37:03.383366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:15.396 
[2024-11-27 05:37:03.384377] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:15.396 [2024-11-27 05:37:03.385384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:15.396 [2024-11-27 05:37:03.385394] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fde4c13f000 00:15:15.396 [2024-11-27 05:37:03.386311] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:15.658 [2024-11-27 05:37:03.395684] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:15.658 [2024-11-27 05:37:03.395710] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:15.658 [2024-11-27 05:37:03.400782] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:15.658 [2024-11-27 05:37:03.400820] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:15.658 [2024-11-27 05:37:03.400889] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:15.658 [2024-11-27 05:37:03.400903] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:15.658 [2024-11-27 05:37:03.400908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:15.658 [2024-11-27 05:37:03.401786] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:15.658 [2024-11-27 05:37:03.401798] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:15.658 [2024-11-27 05:37:03.401805] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:15.658 [2024-11-27 05:37:03.402792] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:15.658 [2024-11-27 05:37:03.402800] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:15.658 [2024-11-27 05:37:03.402806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:15.658 [2024-11-27 05:37:03.403799] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:15.658 [2024-11-27 05:37:03.403807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:15.658 [2024-11-27 05:37:03.404808] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:15.658 [2024-11-27 05:37:03.404815] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:15.658 [2024-11-27 05:37:03.404822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:15.658 [2024-11-27 05:37:03.404829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:15.658 [2024-11-27 05:37:03.404937] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:15.658 [2024-11-27 05:37:03.404941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:15.658 [2024-11-27 05:37:03.404946] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:15.658 [2024-11-27 05:37:03.405817] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:15.658 [2024-11-27 05:37:03.406827] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:15.658 [2024-11-27 05:37:03.407840] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:15.658 [2024-11-27 05:37:03.408837] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:15.658 [2024-11-27 05:37:03.408877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:15.658 [2024-11-27 05:37:03.409850] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:15.658 [2024-11-27 05:37:03.409859] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:15.658 [2024-11-27 05:37:03.409864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:15.658 [2024-11-27 05:37:03.409880] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:15.658 [2024-11-27 05:37:03.409890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:15.658 [2024-11-27 05:37:03.409905] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:15.658 [2024-11-27 05:37:03.409909] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.659 [2024-11-27 05:37:03.409913] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:15.659 [2024-11-27 05:37:03.409923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.418677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 05:37:03.418688] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:15.659 [2024-11-27 05:37:03.418693] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:15.659 [2024-11-27 05:37:03.418696] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:15.659 [2024-11-27 05:37:03.418701] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:15.659 [2024-11-27 05:37:03.418705] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:15.659 [2024-11-27 05:37:03.418712] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:15.659 [2024-11-27 05:37:03.418716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.418723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.418732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.426676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 05:37:03.426689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.659 [2024-11-27 05:37:03.426696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.659 [2024-11-27 05:37:03.426703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.659 [2024-11-27 05:37:03.426710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.659 [2024-11-27 05:37:03.426714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.426723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.426731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.434674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 05:37:03.434682] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:15.659 [2024-11-27 05:37:03.434687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.434697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.434703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.434710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.442674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 05:37:03.442729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.442736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:15.659 
[2024-11-27 05:37:03.442743] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:15.659 [2024-11-27 05:37:03.442747] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:15.659 [2024-11-27 05:37:03.442751] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:15.659 [2024-11-27 05:37:03.442757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.450674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 05:37:03.450688] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:15.659 [2024-11-27 05:37:03.450695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.450702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.450708] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:15.659 [2024-11-27 05:37:03.450712] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.659 [2024-11-27 05:37:03.450715] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:15.659 [2024-11-27 05:37:03.450720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.458674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 05:37:03.458687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.458694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.458700] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:15.659 [2024-11-27 05:37:03.458704] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:15.659 [2024-11-27 05:37:03.458707] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:15.659 [2024-11-27 05:37:03.458713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.466674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 05:37:03.466686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.466692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.466698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.466703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.466708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.466712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.466716] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:15.659 [2024-11-27 05:37:03.466720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:15.659 [2024-11-27 05:37:03.466725] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:15.659 [2024-11-27 05:37:03.466743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.474674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 05:37:03.474687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.482675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 05:37:03.482687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.490674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 
05:37:03.490685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.498676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:15.659 [2024-11-27 05:37:03.498692] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:15.659 [2024-11-27 05:37:03.498696] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:15.659 [2024-11-27 05:37:03.498700] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:15.659 [2024-11-27 05:37:03.498702] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:15.659 [2024-11-27 05:37:03.498705] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:15.659 [2024-11-27 05:37:03.498711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:15.659 [2024-11-27 05:37:03.498717] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:15.659 [2024-11-27 05:37:03.498721] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:15.659 [2024-11-27 05:37:03.498724] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:15.659 [2024-11-27 05:37:03.498729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:15.659 [2024-11-27 05:37:03.498735] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:15.659 [2024-11-27 05:37:03.498739] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:15:15.659 [2024-11-27 05:37:03.498742] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:15:15.659 [2024-11-27 05:37:03.498747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:15:15.659 [2024-11-27 05:37:03.498754] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:15:15.660 [2024-11-27 05:37:03.498757] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:15:15.660 [2024-11-27 05:37:03.498760] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:15:15.660 [2024-11-27 05:37:03.498766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:15:15.660 [2024-11-27 05:37:03.506675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:15:15.660 [2024-11-27 05:37:03.506688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:15:15.660 [2024-11-27 05:37:03.506698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:15:15.660 [2024-11-27 05:37:03.506706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:15:15.660 =====================================================
00:15:15.660 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:15.660 =====================================================
00:15:15.660 Controller Capabilities/Features
00:15:15.660 ================================
00:15:15.660 Vendor ID: 4e58
00:15:15.660 Subsystem Vendor ID: 4e58
00:15:15.660 Serial Number: SPDK2
00:15:15.660 Model Number: SPDK bdev Controller
00:15:15.660 Firmware Version: 25.01
00:15:15.660 Recommended Arb Burst: 6
00:15:15.660 IEEE OUI Identifier: 8d 6b 50
00:15:15.660 Multi-path I/O
00:15:15.660 May have multiple subsystem ports: Yes
00:15:15.660 May have multiple controllers: Yes
00:15:15.660 Associated with SR-IOV VF: No
00:15:15.660 Max Data Transfer Size: 131072
00:15:15.660 Max Number of Namespaces: 32
00:15:15.660 Max Number of I/O Queues: 127
00:15:15.660 NVMe Specification Version (VS): 1.3
00:15:15.660 NVMe Specification Version (Identify): 1.3
00:15:15.660 Maximum Queue Entries: 256
00:15:15.660 Contiguous Queues Required: Yes
00:15:15.660 Arbitration Mechanisms Supported
00:15:15.660 Weighted Round Robin: Not Supported
00:15:15.660 Vendor Specific: Not Supported
00:15:15.660 Reset Timeout: 15000 ms
00:15:15.660 Doorbell Stride: 4 bytes
00:15:15.660 NVM Subsystem Reset: Not Supported
00:15:15.660 Command Sets Supported
00:15:15.660 NVM Command Set: Supported
00:15:15.660 Boot Partition: Not Supported
00:15:15.660 Memory Page Size Minimum: 4096 bytes
00:15:15.660 Memory Page Size Maximum: 4096 bytes
00:15:15.660 Persistent Memory Region: Not Supported
00:15:15.660 Optional Asynchronous Events Supported
00:15:15.660 Namespace Attribute Notices: Supported
00:15:15.660 Firmware Activation Notices: Not Supported
00:15:15.660 ANA Change Notices: Not Supported
00:15:15.660 PLE Aggregate Log Change Notices: Not Supported
00:15:15.660 LBA Status Info Alert Notices: Not Supported
00:15:15.660 EGE Aggregate Log Change Notices: Not Supported
00:15:15.660 Normal NVM Subsystem Shutdown event: Not Supported
00:15:15.660 Zone Descriptor Change Notices: Not Supported
00:15:15.660 Discovery Log Change Notices: Not Supported
00:15:15.660 Controller Attributes
00:15:15.660 128-bit Host Identifier: Supported
00:15:15.660 Non-Operational Permissive Mode: Not Supported
00:15:15.660 NVM Sets: Not Supported
00:15:15.660 Read Recovery Levels: Not Supported
00:15:15.660 Endurance Groups: Not Supported
00:15:15.660 Predictable Latency Mode: Not Supported
00:15:15.660 Traffic Based Keep ALive: Not Supported
00:15:15.660 Namespace Granularity: Not Supported
00:15:15.660 SQ Associations: Not Supported
00:15:15.660 UUID List: Not Supported
00:15:15.660 Multi-Domain Subsystem: Not Supported
00:15:15.660 Fixed Capacity Management: Not Supported
00:15:15.660 Variable Capacity Management: Not Supported
00:15:15.660 Delete Endurance Group: Not Supported
00:15:15.660 Delete NVM Set: Not Supported
00:15:15.660 Extended LBA Formats Supported: Not Supported
00:15:15.660 Flexible Data Placement Supported: Not Supported
00:15:15.660
00:15:15.660 Controller Memory Buffer Support
00:15:15.660 ================================
00:15:15.660 Supported: No
00:15:15.660
00:15:15.660 Persistent Memory Region Support
00:15:15.660 ================================
00:15:15.660 Supported: No
00:15:15.660
00:15:15.660 Admin Command Set Attributes
00:15:15.660 ============================
00:15:15.660 Security Send/Receive: Not Supported
00:15:15.660 Format NVM: Not Supported
00:15:15.660 Firmware Activate/Download: Not Supported
00:15:15.660 Namespace Management: Not Supported
00:15:15.660 Device Self-Test: Not Supported
00:15:15.660 Directives: Not Supported
00:15:15.660 NVMe-MI: Not Supported
00:15:15.660 Virtualization Management: Not Supported
00:15:15.660 Doorbell Buffer Config: Not Supported
00:15:15.660 Get LBA Status Capability: Not Supported
00:15:15.660 Command & Feature Lockdown Capability: Not Supported
00:15:15.660 Abort Command Limit: 4
00:15:15.660 Async Event Request Limit: 4
00:15:15.660 Number of Firmware Slots: N/A
00:15:15.660 Firmware Slot 1 Read-Only: N/A
00:15:15.660 Firmware Activation Without Reset: N/A
00:15:15.660 Multiple Update Detection Support: N/A
00:15:15.660 Firmware Update Granularity: No Information Provided
00:15:15.660 Per-Namespace SMART Log: No
00:15:15.660 Asymmetric Namespace Access Log Page: Not Supported
00:15:15.660 Subsystem NQN: nqn.2019-07.io.spdk:cnode2
00:15:15.660 Command Effects Log Page: Supported
00:15:15.660 Get Log Page Extended Data: Supported
00:15:15.660 Telemetry Log Pages: Not Supported
00:15:15.660 Persistent Event Log Pages: Not Supported
00:15:15.660 Supported Log Pages Log Page: May Support
00:15:15.660 Commands Supported & Effects Log Page: Not Supported
00:15:15.660 Feature Identifiers & Effects Log Page:May Support
00:15:15.660 NVMe-MI Commands & Effects Log Page: May Support
00:15:15.660 Data Area 4 for Telemetry Log: Not Supported
00:15:15.660 Error Log Page Entries Supported: 128
00:15:15.660 Keep Alive: Supported
00:15:15.660 Keep Alive Granularity: 10000 ms
00:15:15.660
00:15:15.660 NVM Command Set Attributes
00:15:15.660 ==========================
00:15:15.660 Submission Queue Entry Size
00:15:15.660 Max: 64
00:15:15.660 Min: 64
00:15:15.660 Completion Queue Entry Size
00:15:15.660 Max: 16
00:15:15.660 Min: 16
00:15:15.660 Number of Namespaces: 32
00:15:15.660 Compare Command: Supported
00:15:15.660 Write Uncorrectable Command: Not Supported
00:15:15.660 Dataset Management Command: Supported
00:15:15.660 Write Zeroes Command: Supported
00:15:15.660 Set Features Save Field: Not Supported
00:15:15.660 Reservations: Not Supported
00:15:15.660 Timestamp: Not Supported
00:15:15.660 Copy: Supported
00:15:15.660 Volatile Write Cache: Present
00:15:15.660 Atomic Write Unit (Normal): 1
00:15:15.660 Atomic Write Unit (PFail): 1
00:15:15.660 Atomic Compare & Write Unit: 1
00:15:15.660 Fused Compare & Write: Supported
00:15:15.660 Scatter-Gather List
00:15:15.660 SGL Command Set: Supported (Dword aligned)
00:15:15.660 SGL Keyed: Not Supported
00:15:15.660 SGL Bit Bucket Descriptor: Not Supported
00:15:15.660 SGL Metadata Pointer: Not Supported
00:15:15.660 Oversized SGL: Not Supported
00:15:15.660 SGL
Metadata Address: Not Supported 00:15:15.660 SGL Offset: Not Supported 00:15:15.660 Transport SGL Data Block: Not Supported 00:15:15.660 Replay Protected Memory Block: Not Supported 00:15:15.660 00:15:15.660 Firmware Slot Information 00:15:15.660 ========================= 00:15:15.660 Active slot: 1 00:15:15.660 Slot 1 Firmware Revision: 25.01 00:15:15.660 00:15:15.660 00:15:15.660 Commands Supported and Effects 00:15:15.660 ============================== 00:15:15.660 Admin Commands 00:15:15.660 -------------- 00:15:15.660 Get Log Page (02h): Supported 00:15:15.660 Identify (06h): Supported 00:15:15.660 Abort (08h): Supported 00:15:15.660 Set Features (09h): Supported 00:15:15.660 Get Features (0Ah): Supported 00:15:15.660 Asynchronous Event Request (0Ch): Supported 00:15:15.660 Keep Alive (18h): Supported 00:15:15.660 I/O Commands 00:15:15.660 ------------ 00:15:15.660 Flush (00h): Supported LBA-Change 00:15:15.660 Write (01h): Supported LBA-Change 00:15:15.660 Read (02h): Supported 00:15:15.660 Compare (05h): Supported 00:15:15.660 Write Zeroes (08h): Supported LBA-Change 00:15:15.660 Dataset Management (09h): Supported LBA-Change 00:15:15.660 Copy (19h): Supported LBA-Change 00:15:15.660 00:15:15.660 Error Log 00:15:15.660 ========= 00:15:15.660 00:15:15.660 Arbitration 00:15:15.660 =========== 00:15:15.660 Arbitration Burst: 1 00:15:15.660 00:15:15.660 Power Management 00:15:15.660 ================ 00:15:15.660 Number of Power States: 1 00:15:15.660 Current Power State: Power State #0 00:15:15.660 Power State #0: 00:15:15.660 Max Power: 0.00 W 00:15:15.660 Non-Operational State: Operational 00:15:15.660 Entry Latency: Not Reported 00:15:15.661 Exit Latency: Not Reported 00:15:15.661 Relative Read Throughput: 0 00:15:15.661 Relative Read Latency: 0 00:15:15.661 Relative Write Throughput: 0 00:15:15.661 Relative Write Latency: 0 00:15:15.661 Idle Power: Not Reported 00:15:15.661 Active Power: Not Reported 00:15:15.661 Non-Operational Permissive Mode: Not 
Supported 00:15:15.661 00:15:15.661 Health Information 00:15:15.661 ================== 00:15:15.661 Critical Warnings: 00:15:15.661 Available Spare Space: OK 00:15:15.661 Temperature: OK 00:15:15.661 Device Reliability: OK 00:15:15.661 Read Only: No 00:15:15.661 Volatile Memory Backup: OK 00:15:15.661 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:15.661 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:15.661 Available Spare: 0% 00:15:15.661 Available Sp[2024-11-27 05:37:03.506794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:15.661 [2024-11-27 05:37:03.514675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:15.661 [2024-11-27 05:37:03.514705] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:15.661 [2024-11-27 05:37:03.514714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.661 [2024-11-27 05:37:03.514719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.661 [2024-11-27 05:37:03.514725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.661 [2024-11-27 05:37:03.514730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.661 [2024-11-27 05:37:03.514786] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:15.661 [2024-11-27 05:37:03.514797] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:15.661 
[2024-11-27 05:37:03.515796] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:15.661 [2024-11-27 05:37:03.515838] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:15.661 [2024-11-27 05:37:03.515845] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:15.661 [2024-11-27 05:37:03.516805] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:15.661 [2024-11-27 05:37:03.516816] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:15.661 [2024-11-27 05:37:03.516863] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:15.661 [2024-11-27 05:37:03.517827] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:15.661 are Threshold: 0% 00:15:15.661 Life Percentage Used: 0% 00:15:15.661 Data Units Read: 0 00:15:15.661 Data Units Written: 0 00:15:15.661 Host Read Commands: 0 00:15:15.661 Host Write Commands: 0 00:15:15.661 Controller Busy Time: 0 minutes 00:15:15.661 Power Cycles: 0 00:15:15.661 Power On Hours: 0 hours 00:15:15.661 Unsafe Shutdowns: 0 00:15:15.661 Unrecoverable Media Errors: 0 00:15:15.661 Lifetime Error Log Entries: 0 00:15:15.661 Warning Temperature Time: 0 minutes 00:15:15.661 Critical Temperature Time: 0 minutes 00:15:15.661 00:15:15.661 Number of Queues 00:15:15.661 ================ 00:15:15.661 Number of I/O Submission Queues: 127 00:15:15.661 Number of I/O Completion Queues: 127 00:15:15.661 00:15:15.661 Active Namespaces 00:15:15.661 ================= 00:15:15.661 Namespace ID:1 00:15:15.661 Error Recovery Timeout: Unlimited 
00:15:15.661 Command Set Identifier: NVM (00h) 00:15:15.661 Deallocate: Supported 00:15:15.661 Deallocated/Unwritten Error: Not Supported 00:15:15.661 Deallocated Read Value: Unknown 00:15:15.661 Deallocate in Write Zeroes: Not Supported 00:15:15.661 Deallocated Guard Field: 0xFFFF 00:15:15.661 Flush: Supported 00:15:15.661 Reservation: Supported 00:15:15.661 Namespace Sharing Capabilities: Multiple Controllers 00:15:15.661 Size (in LBAs): 131072 (0GiB) 00:15:15.661 Capacity (in LBAs): 131072 (0GiB) 00:15:15.661 Utilization (in LBAs): 131072 (0GiB) 00:15:15.661 NGUID: F2894408A8D145DA9CE961AB30692E4D 00:15:15.661 UUID: f2894408-a8d1-45da-9ce9-61ab30692e4d 00:15:15.661 Thin Provisioning: Not Supported 00:15:15.661 Per-NS Atomic Units: Yes 00:15:15.661 Atomic Boundary Size (Normal): 0 00:15:15.661 Atomic Boundary Size (PFail): 0 00:15:15.661 Atomic Boundary Offset: 0 00:15:15.661 Maximum Single Source Range Length: 65535 00:15:15.661 Maximum Copy Length: 65535 00:15:15.661 Maximum Source Range Count: 1 00:15:15.661 NGUID/EUI64 Never Reused: No 00:15:15.661 Namespace Write Protected: No 00:15:15.661 Number of LBA Formats: 1 00:15:15.661 Current LBA Format: LBA Format #00 00:15:15.661 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:15.661 00:15:15.661 05:37:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:15.921 [2024-11-27 05:37:03.746854] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.197 Initializing NVMe Controllers 00:15:21.197 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.197 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:21.197 Initialization complete. Launching workers. 00:15:21.197 ======================================================== 00:15:21.197 Latency(us) 00:15:21.197 Device Information : IOPS MiB/s Average min max 00:15:21.197 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39934.31 155.99 3205.10 969.35 10472.82 00:15:21.197 ======================================================== 00:15:21.197 Total : 39934.31 155.99 3205.10 969.35 10472.82 00:15:21.197 00:15:21.197 [2024-11-27 05:37:08.854920] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.197 05:37:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:21.197 [2024-11-27 05:37:09.095662] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.471 Initializing NVMe Controllers 00:15:26.471 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.471 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:26.471 Initialization complete. Launching workers. 
00:15:26.471 ======================================================== 00:15:26.471 Latency(us) 00:15:26.471 Device Information : IOPS MiB/s Average min max 00:15:26.471 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39915.24 155.92 3206.63 975.39 6641.48 00:15:26.471 ======================================================== 00:15:26.471 Total : 39915.24 155.92 3206.63 975.39 6641.48 00:15:26.471 00:15:26.471 [2024-11-27 05:37:14.115994] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.471 05:37:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:26.471 [2024-11-27 05:37:14.330026] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.751 [2024-11-27 05:37:19.466779] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.751 Initializing NVMe Controllers 00:15:31.751 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:31.751 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:31.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:31.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:31.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:31.751 Initialization complete. Launching workers. 
00:15:31.751 Starting thread on core 2 00:15:31.751 Starting thread on core 3 00:15:31.751 Starting thread on core 1 00:15:31.751 05:37:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:32.011 [2024-11-27 05:37:19.763135] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.303 [2024-11-27 05:37:22.829978] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.303 Initializing NVMe Controllers 00:15:35.303 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.303 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.303 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:35.303 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:35.303 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:35.303 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:35.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:35.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:35.303 Initialization complete. Launching workers. 
00:15:35.303 Starting thread on core 1 with urgent priority queue 00:15:35.303 Starting thread on core 2 with urgent priority queue 00:15:35.303 Starting thread on core 3 with urgent priority queue 00:15:35.303 Starting thread on core 0 with urgent priority queue 00:15:35.303 SPDK bdev Controller (SPDK2 ) core 0: 8386.33 IO/s 11.92 secs/100000 ios 00:15:35.303 SPDK bdev Controller (SPDK2 ) core 1: 9381.33 IO/s 10.66 secs/100000 ios 00:15:35.303 SPDK bdev Controller (SPDK2 ) core 2: 8814.33 IO/s 11.35 secs/100000 ios 00:15:35.303 SPDK bdev Controller (SPDK2 ) core 3: 9044.00 IO/s 11.06 secs/100000 ios 00:15:35.303 ======================================================== 00:15:35.303 00:15:35.303 05:37:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:35.303 [2024-11-27 05:37:23.117922] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.303 Initializing NVMe Controllers 00:15:35.303 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.303 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.303 Namespace ID: 1 size: 0GB 00:15:35.303 Initialization complete. 00:15:35.303 INFO: using host memory buffer for IO 00:15:35.303 Hello world! 
00:15:35.304 [2024-11-27 05:37:23.131015] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.304 05:37:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:35.563 [2024-11-27 05:37:23.407383] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:36.501 Initializing NVMe Controllers 00:15:36.501 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:36.501 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:36.501 Initialization complete. Launching workers. 00:15:36.501 submit (in ns) avg, min, max = 6592.1, 3184.8, 4001833.3 00:15:36.501 complete (in ns) avg, min, max = 20654.9, 1718.1, 4000764.8 00:15:36.501 00:15:36.501 Submit histogram 00:15:36.501 ================ 00:15:36.501 Range in us Cumulative Count 00:15:36.501 3.185 - 3.200: 0.0981% ( 16) 00:15:36.501 3.200 - 3.215: 0.3311% ( 38) 00:15:36.501 3.215 - 3.230: 0.8462% ( 84) 00:15:36.501 3.230 - 3.246: 1.9070% ( 173) 00:15:36.501 3.246 - 3.261: 4.3782% ( 403) 00:15:36.501 3.261 - 3.276: 10.4673% ( 993) 00:15:36.501 3.276 - 3.291: 16.5318% ( 989) 00:15:36.501 3.291 - 3.307: 22.5901% ( 988) 00:15:36.501 3.307 - 3.322: 29.0716% ( 1057) 00:15:36.501 3.322 - 3.337: 34.7621% ( 928) 00:15:36.501 3.337 - 3.352: 40.1153% ( 873) 00:15:36.501 3.352 - 3.368: 46.2902% ( 1007) 00:15:36.501 3.368 - 3.383: 52.1217% ( 951) 00:15:36.501 3.383 - 3.398: 56.9107% ( 781) 00:15:36.501 3.398 - 3.413: 62.8158% ( 963) 00:15:36.501 3.413 - 3.429: 70.7199% ( 1289) 00:15:36.501 3.429 - 3.444: 75.4231% ( 767) 00:15:36.501 3.444 - 3.459: 80.1692% ( 774) 00:15:36.501 3.459 - 3.474: 83.8178% ( 595) 00:15:36.501 3.474 - 3.490: 85.8474% ( 331) 00:15:36.501 3.490 - 3.505: 87.0186% ( 191) 
00:15:36.501 3.505 - 3.520: 87.5399% ( 85) 00:15:36.501 3.520 - 3.535: 87.9078% ( 60) 00:15:36.501 3.535 - 3.550: 88.3186% ( 67) 00:15:36.501 3.550 - 3.566: 88.9073% ( 96) 00:15:36.501 3.566 - 3.581: 89.7842% ( 143) 00:15:36.501 3.581 - 3.596: 90.7285% ( 154) 00:15:36.501 3.596 - 3.611: 91.8322% ( 180) 00:15:36.501 3.611 - 3.627: 92.7827% ( 155) 00:15:36.501 3.627 - 3.642: 93.7147% ( 152) 00:15:36.501 3.642 - 3.657: 94.5793% ( 141) 00:15:36.501 3.657 - 3.672: 95.6156% ( 169) 00:15:36.501 3.672 - 3.688: 96.4557% ( 137) 00:15:36.501 3.688 - 3.703: 97.3633% ( 148) 00:15:36.501 3.703 - 3.718: 97.9274% ( 92) 00:15:36.501 3.718 - 3.733: 98.4118% ( 79) 00:15:36.501 3.733 - 3.749: 98.6755% ( 43) 00:15:36.501 3.749 - 3.764: 98.9637% ( 47) 00:15:36.501 3.764 - 3.779: 99.2028% ( 39) 00:15:36.501 3.779 - 3.794: 99.3745% ( 28) 00:15:36.501 3.794 - 3.810: 99.4910% ( 19) 00:15:36.501 3.810 - 3.825: 99.5278% ( 6) 00:15:36.501 3.825 - 3.840: 99.5524% ( 4) 00:15:36.501 3.840 - 3.855: 99.5892% ( 6) 00:15:36.501 3.855 - 3.870: 99.6198% ( 5) 00:15:36.501 3.870 - 3.886: 99.6260% ( 1) 00:15:36.501 3.992 - 4.023: 99.6321% ( 1) 00:15:36.501 4.084 - 4.114: 99.6382% ( 1) 00:15:36.501 5.303 - 5.333: 99.6443% ( 1) 00:15:36.501 5.333 - 5.364: 99.6566% ( 2) 00:15:36.501 5.394 - 5.425: 99.6689% ( 2) 00:15:36.501 5.516 - 5.547: 99.6811% ( 2) 00:15:36.501 5.547 - 5.577: 99.6873% ( 1) 00:15:36.501 5.577 - 5.608: 99.6934% ( 1) 00:15:36.501 5.973 - 6.004: 99.6995% ( 1) 00:15:36.501 6.309 - 6.339: 99.7057% ( 1) 00:15:36.501 6.400 - 6.430: 99.7118% ( 1) 00:15:36.501 6.430 - 6.461: 99.7179% ( 1) 00:15:36.501 6.461 - 6.491: 99.7241% ( 1) 00:15:36.501 6.552 - 6.583: 99.7363% ( 2) 00:15:36.501 6.613 - 6.644: 99.7486% ( 2) 00:15:36.501 6.888 - 6.918: 99.7609% ( 2) 00:15:36.501 6.918 - 6.949: 99.7670% ( 1) 00:15:36.501 6.979 - 7.010: 99.7731% ( 1) 00:15:36.501 7.070 - 7.101: 99.7792% ( 1) 00:15:36.501 7.253 - 7.284: 99.7854% ( 1) 00:15:36.501 7.375 - 7.406: 99.7976% ( 2) 00:15:36.501 7.406 - 7.436: 99.8099% ( 
2) 00:15:36.501 7.467 - 7.497: 99.8222% ( 2) 00:15:36.501 7.741 - 7.771: 99.8283% ( 1) 00:15:36.501 7.771 - 7.802: 99.8344% ( 1) 00:15:36.501 8.168 - 8.229: 99.8467% ( 2) 00:15:36.501 8.229 - 8.290: 99.8528% ( 1) 00:15:36.501 8.350 - 8.411: 99.8651% ( 2) 00:15:36.501 8.533 - 8.594: 99.8712% ( 1) 00:15:36.501 8.594 - 8.655: 99.8774% ( 1) 00:15:36.501 8.716 - 8.777: 99.8958% ( 3) 00:15:36.501 8.777 - 8.838: 99.9019% ( 1) 00:15:36.501 8.838 - 8.899: 99.9080% ( 1) 00:15:36.761 [2024-11-27 05:37:24.508654] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:36.761 10.240 - 10.301: 99.9142% ( 1) 00:15:36.761 16.701 - 16.823: 99.9203% ( 1) 00:15:36.761 3994.575 - 4025.783: 100.0000% ( 13) 00:15:36.761 00:15:36.761 Complete histogram 00:15:36.761 ================== 00:15:36.761 Range in us Cumulative Count 00:15:36.761 1.714 - 1.722: 0.0061% ( 1) 00:15:36.761 1.730 - 1.737: 0.0123% ( 1) 00:15:36.761 1.752 - 1.760: 0.0184% ( 1) 00:15:36.761 1.760 - 1.768: 0.0613% ( 7) 00:15:36.761 1.768 - 1.775: 0.2882% ( 37) 00:15:36.761 1.775 - 1.783: 0.7542% ( 76) 00:15:36.761 1.783 - 1.790: 1.6188% ( 141) 00:15:36.761 1.790 - 1.798: 2.6183% ( 163) 00:15:36.761 1.798 - 1.806: 3.3664% ( 122) 00:15:36.761 1.806 - 1.813: 7.5668% ( 685) 00:15:36.761 1.813 - 1.821: 30.3225% ( 3711) 00:15:36.761 1.821 - 1.829: 64.9191% ( 5642) 00:15:36.761 1.829 - 1.836: 84.2961% ( 3160) 00:15:36.761 1.836 - 1.844: 90.6304% ( 1033) 00:15:36.761 1.844 - 1.851: 93.6534% ( 493) 00:15:36.761 1.851 - 1.859: 95.5973% ( 317) 00:15:36.761 1.859 - 1.867: 96.4741% ( 143) 00:15:36.761 1.867 - 1.874: 96.8052% ( 54) 00:15:36.761 1.874 - 1.882: 97.0567% ( 41) 00:15:36.761 1.882 - 1.890: 97.4000% ( 56) 00:15:36.761 1.890 - 1.897: 97.9029% ( 82) 00:15:36.761 1.897 - 1.905: 98.4057% ( 82) 00:15:36.761 1.905 - 1.912: 98.7920% ( 63) 00:15:36.761 1.912 - 1.920: 99.0434% ( 41) 00:15:36.761 1.920 - 1.928: 99.1783% ( 22) 00:15:36.761 1.928 - 1.935: 99.2212% ( 7) 00:15:36.761 
1.935 - 1.943: 99.2396% ( 3) 00:15:36.761 1.943 - 1.950: 99.2458% ( 1) 00:15:36.761 1.966 - 1.981: 99.2519% ( 1) 00:15:36.761 2.042 - 2.057: 99.2580% ( 1) 00:15:36.761 2.149 - 2.164: 99.2642% ( 1) 00:15:36.761 2.179 - 2.194: 99.2703% ( 1) 00:15:36.761 3.840 - 3.855: 99.2764% ( 1) 00:15:36.761 3.870 - 3.886: 99.2826% ( 1) 00:15:36.761 4.023 - 4.053: 99.2887% ( 1) 00:15:36.761 4.084 - 4.114: 99.2948% ( 1) 00:15:36.761 4.450 - 4.480: 99.3010% ( 1) 00:15:36.761 4.602 - 4.632: 99.3071% ( 1) 00:15:36.761 4.815 - 4.846: 99.3132% ( 1) 00:15:36.761 4.968 - 4.998: 99.3194% ( 1) 00:15:36.761 5.029 - 5.059: 99.3255% ( 1) 00:15:36.761 5.090 - 5.120: 99.3439% ( 3) 00:15:36.761 5.211 - 5.242: 99.3500% ( 1) 00:15:36.761 5.272 - 5.303: 99.3561% ( 1) 00:15:36.761 5.364 - 5.394: 99.3623% ( 1) 00:15:36.761 5.425 - 5.455: 99.3684% ( 1) 00:15:36.761 5.455 - 5.486: 99.3745% ( 1) 00:15:36.761 5.516 - 5.547: 99.3807% ( 1) 00:15:36.761 5.608 - 5.638: 99.3868% ( 1) 00:15:36.761 5.699 - 5.730: 99.3929% ( 1) 00:15:36.761 5.730 - 5.760: 99.3991% ( 1) 00:15:36.761 5.821 - 5.851: 99.4052% ( 1) 00:15:36.761 5.851 - 5.882: 99.4113% ( 1) 00:15:36.761 5.943 - 5.973: 99.4175% ( 1) 00:15:36.761 6.156 - 6.187: 99.4236% ( 1) 00:15:36.761 6.370 - 6.400: 99.4297% ( 1) 00:15:36.761 6.461 - 6.491: 99.4359% ( 1) 00:15:36.761 6.552 - 6.583: 99.4420% ( 1) 00:15:36.761 6.735 - 6.766: 99.4481% ( 1) 00:15:36.761 6.827 - 6.857: 99.4604% ( 2) 00:15:36.761 6.888 - 6.918: 99.4665% ( 1) 00:15:36.761 7.040 - 7.070: 99.4727% ( 1) 00:15:36.761 7.070 - 7.101: 99.4788% ( 1) 00:15:36.761 7.284 - 7.314: 99.4849% ( 1) 00:15:36.761 7.985 - 8.046: 99.4910% ( 1) 00:15:36.761 8.168 - 8.229: 99.4972% ( 1) 00:15:36.761 8.229 - 8.290: 99.5033% ( 1) 00:15:36.761 8.594 - 8.655: 99.5094% ( 1) 00:15:36.761 12.190 - 12.251: 99.5156% ( 1) 00:15:36.761 14.019 - 14.080: 99.5217% ( 1) 00:15:36.761 17.798 - 17.920: 99.5278% ( 1) 00:15:36.761 3198.781 - 3214.385: 99.5340% ( 1) 00:15:36.761 3994.575 - 4025.783: 100.0000% ( 76) 00:15:36.761 
00:15:36.761 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:36.761 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:36.761 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:36.761 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:36.761 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:36.761 [ 00:15:36.761 { 00:15:36.761 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:36.761 "subtype": "Discovery", 00:15:36.761 "listen_addresses": [], 00:15:36.761 "allow_any_host": true, 00:15:36.761 "hosts": [] 00:15:36.761 }, 00:15:36.761 { 00:15:36.761 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:36.761 "subtype": "NVMe", 00:15:36.761 "listen_addresses": [ 00:15:36.761 { 00:15:36.761 "trtype": "VFIOUSER", 00:15:36.761 "adrfam": "IPv4", 00:15:36.761 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:36.761 "trsvcid": "0" 00:15:36.761 } 00:15:36.761 ], 00:15:36.761 "allow_any_host": true, 00:15:36.761 "hosts": [], 00:15:36.761 "serial_number": "SPDK1", 00:15:36.762 "model_number": "SPDK bdev Controller", 00:15:36.762 "max_namespaces": 32, 00:15:36.762 "min_cntlid": 1, 00:15:36.762 "max_cntlid": 65519, 00:15:36.762 "namespaces": [ 00:15:36.762 { 00:15:36.762 "nsid": 1, 00:15:36.762 "bdev_name": "Malloc1", 00:15:36.762 "name": "Malloc1", 00:15:36.762 "nguid": "391AC271CA574ABA843C16BEAABBA2DD", 00:15:36.762 "uuid": "391ac271-ca57-4aba-843c-16beaabba2dd" 00:15:36.762 }, 00:15:36.762 { 00:15:36.762 "nsid": 2, 00:15:36.762 "bdev_name": "Malloc3", 00:15:36.762 "name": "Malloc3", 
00:15:36.762 "nguid": "2BC8DD1E8BD2496FA2C70FC5FB22DB12", 00:15:36.762 "uuid": "2bc8dd1e-8bd2-496f-a2c7-0fc5fb22db12" 00:15:36.762 } 00:15:36.762 ] 00:15:36.762 }, 00:15:36.762 { 00:15:36.762 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:36.762 "subtype": "NVMe", 00:15:36.762 "listen_addresses": [ 00:15:36.762 { 00:15:36.762 "trtype": "VFIOUSER", 00:15:36.762 "adrfam": "IPv4", 00:15:36.762 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:36.762 "trsvcid": "0" 00:15:36.762 } 00:15:36.762 ], 00:15:36.762 "allow_any_host": true, 00:15:36.762 "hosts": [], 00:15:36.762 "serial_number": "SPDK2", 00:15:36.762 "model_number": "SPDK bdev Controller", 00:15:36.762 "max_namespaces": 32, 00:15:36.762 "min_cntlid": 1, 00:15:36.762 "max_cntlid": 65519, 00:15:36.762 "namespaces": [ 00:15:36.762 { 00:15:36.762 "nsid": 1, 00:15:36.762 "bdev_name": "Malloc2", 00:15:36.762 "name": "Malloc2", 00:15:36.762 "nguid": "F2894408A8D145DA9CE961AB30692E4D", 00:15:36.762 "uuid": "f2894408-a8d1-45da-9ce9-61ab30692e4d" 00:15:36.762 } 00:15:36.762 ] 00:15:36.762 } 00:15:36.762 ] 00:15:36.762 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:36.762 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:36.762 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1733125 00:15:36.762 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:36.762 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:36.762 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:36.762 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:36.762 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:36.762 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:36.762 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:37.021 [2024-11-27 05:37:24.896105] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:37.021 Malloc4 00:15:37.021 05:37:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:37.280 [2024-11-27 05:37:25.137948] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:37.280 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:37.280 Asynchronous Event Request test 00:15:37.280 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.280 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:37.280 Registering asynchronous event callbacks... 00:15:37.280 Starting namespace attribute notice tests for all controllers... 00:15:37.280 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:37.280 aer_cb - Changed Namespace 00:15:37.280 Cleaning up... 
00:15:37.539 [ 00:15:37.539 { 00:15:37.539 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:37.539 "subtype": "Discovery", 00:15:37.539 "listen_addresses": [], 00:15:37.539 "allow_any_host": true, 00:15:37.539 "hosts": [] 00:15:37.539 }, 00:15:37.539 { 00:15:37.539 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:37.539 "subtype": "NVMe", 00:15:37.539 "listen_addresses": [ 00:15:37.539 { 00:15:37.539 "trtype": "VFIOUSER", 00:15:37.539 "adrfam": "IPv4", 00:15:37.539 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:37.539 "trsvcid": "0" 00:15:37.539 } 00:15:37.539 ], 00:15:37.539 "allow_any_host": true, 00:15:37.539 "hosts": [], 00:15:37.539 "serial_number": "SPDK1", 00:15:37.539 "model_number": "SPDK bdev Controller", 00:15:37.539 "max_namespaces": 32, 00:15:37.539 "min_cntlid": 1, 00:15:37.539 "max_cntlid": 65519, 00:15:37.539 "namespaces": [ 00:15:37.539 { 00:15:37.539 "nsid": 1, 00:15:37.539 "bdev_name": "Malloc1", 00:15:37.539 "name": "Malloc1", 00:15:37.539 "nguid": "391AC271CA574ABA843C16BEAABBA2DD", 00:15:37.539 "uuid": "391ac271-ca57-4aba-843c-16beaabba2dd" 00:15:37.539 }, 00:15:37.539 { 00:15:37.539 "nsid": 2, 00:15:37.539 "bdev_name": "Malloc3", 00:15:37.539 "name": "Malloc3", 00:15:37.539 "nguid": "2BC8DD1E8BD2496FA2C70FC5FB22DB12", 00:15:37.539 "uuid": "2bc8dd1e-8bd2-496f-a2c7-0fc5fb22db12" 00:15:37.539 } 00:15:37.539 ] 00:15:37.539 }, 00:15:37.539 { 00:15:37.539 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:37.539 "subtype": "NVMe", 00:15:37.539 "listen_addresses": [ 00:15:37.539 { 00:15:37.539 "trtype": "VFIOUSER", 00:15:37.539 "adrfam": "IPv4", 00:15:37.539 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:37.539 "trsvcid": "0" 00:15:37.539 } 00:15:37.539 ], 00:15:37.539 "allow_any_host": true, 00:15:37.539 "hosts": [], 00:15:37.539 "serial_number": "SPDK2", 00:15:37.539 "model_number": "SPDK bdev Controller", 00:15:37.539 "max_namespaces": 32, 00:15:37.539 "min_cntlid": 1, 00:15:37.539 "max_cntlid": 65519, 00:15:37.539 "namespaces": [ 
00:15:37.539 { 00:15:37.539 "nsid": 1, 00:15:37.539 "bdev_name": "Malloc2", 00:15:37.539 "name": "Malloc2", 00:15:37.539 "nguid": "F2894408A8D145DA9CE961AB30692E4D", 00:15:37.539 "uuid": "f2894408-a8d1-45da-9ce9-61ab30692e4d" 00:15:37.539 }, 00:15:37.539 { 00:15:37.539 "nsid": 2, 00:15:37.539 "bdev_name": "Malloc4", 00:15:37.539 "name": "Malloc4", 00:15:37.539 "nguid": "EB9BB16E8B7F41738DE12E6811AB1568", 00:15:37.539 "uuid": "eb9bb16e-8b7f-4173-8de1-2e6811ab1568" 00:15:37.539 } 00:15:37.539 ] 00:15:37.539 } 00:15:37.539 ] 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1733125 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1725501 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1725501 ']' 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1725501 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1725501 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1725501' 00:15:37.539 killing process with pid 1725501 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1725501 00:15:37.539 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1725501 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1733362 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1733362' 00:15:37.798 Process pid: 1733362 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1733362 00:15:37.798 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1733362 ']' 00:15:37.799 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.799 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.799 
05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.799 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.799 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:37.799 [2024-11-27 05:37:25.723327] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:37.799 [2024-11-27 05:37:25.724188] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:15:37.799 [2024-11-27 05:37:25.724224] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.799 [2024-11-27 05:37:25.795298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.058 [2024-11-27 05:37:25.837499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.058 [2024-11-27 05:37:25.837540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.058 [2024-11-27 05:37:25.837547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.058 [2024-11-27 05:37:25.837553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.058 [2024-11-27 05:37:25.837558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:38.058 [2024-11-27 05:37:25.838983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.058 [2024-11-27 05:37:25.839096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.058 [2024-11-27 05:37:25.839201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.058 [2024-11-27 05:37:25.839202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.058 [2024-11-27 05:37:25.908246] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:38.058 [2024-11-27 05:37:25.909203] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:38.058 [2024-11-27 05:37:25.909265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:38.058 [2024-11-27 05:37:25.909484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:38.058 [2024-11-27 05:37:25.909540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:38.058 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.058 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:38.058 05:37:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:38.996 05:37:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:39.255 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:39.255 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:39.255 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.255 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:39.255 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:39.514 Malloc1 00:15:39.514 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:39.773 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:39.773 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:40.032 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:40.032 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:40.032 05:37:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:40.290 Malloc2 00:15:40.290 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:40.549 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:40.807 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:40.807 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:40.807 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1733362 00:15:40.807 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1733362 ']' 00:15:40.807 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1733362 00:15:40.807 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:40.807 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.807 05:37:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1733362 00:15:41.066 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.066 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.066 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1733362' 00:15:41.066 killing process with pid 1733362 00:15:41.066 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1733362 00:15:41.066 05:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1733362 00:15:41.066 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:41.066 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:41.066 00:15:41.066 real 0m50.814s 00:15:41.066 user 3m16.662s 00:15:41.066 sys 0m3.266s 00:15:41.066 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.066 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:41.066 ************************************ 00:15:41.066 END TEST nvmf_vfio_user 00:15:41.066 ************************************ 00:15:41.066 05:37:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:41.066 05:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.066 05:37:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.066 05:37:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.326 ************************************ 00:15:41.326 START TEST nvmf_vfio_user_nvme_compliance 00:15:41.326 ************************************ 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:41.326 * Looking for test storage... 00:15:41.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.326 05:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.326 05:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:41.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.326 --rc genhtml_branch_coverage=1 00:15:41.326 --rc genhtml_function_coverage=1 00:15:41.326 --rc genhtml_legend=1 00:15:41.326 --rc geninfo_all_blocks=1 00:15:41.326 --rc geninfo_unexecuted_blocks=1 00:15:41.326 00:15:41.326 ' 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:41.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.326 --rc genhtml_branch_coverage=1 00:15:41.326 --rc genhtml_function_coverage=1 00:15:41.326 --rc genhtml_legend=1 00:15:41.326 --rc geninfo_all_blocks=1 00:15:41.326 --rc geninfo_unexecuted_blocks=1 00:15:41.326 00:15:41.326 ' 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:41.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.326 --rc genhtml_branch_coverage=1 00:15:41.326 --rc genhtml_function_coverage=1 00:15:41.326 --rc 
genhtml_legend=1 00:15:41.326 --rc geninfo_all_blocks=1 00:15:41.326 --rc geninfo_unexecuted_blocks=1 00:15:41.326 00:15:41.326 ' 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:41.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.326 --rc genhtml_branch_coverage=1 00:15:41.326 --rc genhtml_function_coverage=1 00:15:41.326 --rc genhtml_legend=1 00:15:41.326 --rc geninfo_all_blocks=1 00:15:41.326 --rc geninfo_unexecuted_blocks=1 00:15:41.326 00:15:41.326 ' 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.326 05:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.326 05:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:41.326 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1733914 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1733914' 00:15:41.327 Process pid: 1733914 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1733914 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1733914 ']' 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.327 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.585 [2024-11-27 05:37:29.336655] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:15:41.585 [2024-11-27 05:37:29.336708] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.585 [2024-11-27 05:37:29.414216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:41.585 [2024-11-27 05:37:29.455903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.585 [2024-11-27 05:37:29.455942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.585 [2024-11-27 05:37:29.455949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.585 [2024-11-27 05:37:29.455954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.585 [2024-11-27 05:37:29.455959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:41.585 [2024-11-27 05:37:29.457260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.585 [2024-11-27 05:37:29.457370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.585 [2024-11-27 05:37:29.457370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.585 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.585 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:41.585 05:37:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.962 05:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.962 malloc0 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:42.962 05:37:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:42.962 00:15:42.962 00:15:42.962 CUnit - A unit testing framework for C - Version 2.1-3 00:15:42.962 http://cunit.sourceforge.net/ 00:15:42.962 00:15:42.962 00:15:42.962 Suite: nvme_compliance 00:15:42.962 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-27 05:37:30.786153] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.962 [2024-11-27 05:37:30.787492] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:42.962 [2024-11-27 05:37:30.787507] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:42.962 [2024-11-27 05:37:30.787513] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:42.962 [2024-11-27 05:37:30.790180] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.962 passed 00:15:42.962 Test: admin_identify_ctrlr_verify_fused ...[2024-11-27 05:37:30.868758] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.962 [2024-11-27 05:37:30.871775] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.962 passed 00:15:42.962 Test: admin_identify_ns ...[2024-11-27 05:37:30.947929] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.221 [2024-11-27 05:37:31.011685] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:43.222 [2024-11-27 05:37:31.019685] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:43.222 [2024-11-27 05:37:31.040775] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:43.222 passed 00:15:43.222 Test: admin_get_features_mandatory_features ...[2024-11-27 05:37:31.114560] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.222 [2024-11-27 05:37:31.117579] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.222 passed 00:15:43.222 Test: admin_get_features_optional_features ...[2024-11-27 05:37:31.194116] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.222 [2024-11-27 05:37:31.197132] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.481 passed 00:15:43.481 Test: admin_set_features_number_of_queues ...[2024-11-27 05:37:31.273861] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.481 [2024-11-27 05:37:31.377759] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.481 passed 00:15:43.481 Test: admin_get_log_page_mandatory_logs ...[2024-11-27 05:37:31.453123] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.481 [2024-11-27 05:37:31.456148] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.740 passed 00:15:43.740 Test: admin_get_log_page_with_lpo ...[2024-11-27 05:37:31.530776] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.740 [2024-11-27 05:37:31.599681] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:43.740 [2024-11-27 05:37:31.612737] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.740 passed 00:15:43.740 Test: fabric_property_get ...[2024-11-27 05:37:31.686373] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.740 [2024-11-27 05:37:31.687608] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:43.740 [2024-11-27 05:37:31.689401] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.740 passed 00:15:43.999 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-27 05:37:31.765901] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.999 [2024-11-27 05:37:31.767127] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:43.999 [2024-11-27 05:37:31.768916] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.999 passed 00:15:43.999 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-27 05:37:31.844893] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.999 [2024-11-27 05:37:31.931677] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:43.999 [2024-11-27 05:37:31.946689] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:43.999 [2024-11-27 05:37:31.951759] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.999 passed 00:15:44.257 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-27 05:37:32.025526] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.257 [2024-11-27 05:37:32.026770] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:44.257 [2024-11-27 05:37:32.028554] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.257 passed 00:15:44.257 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-27 05:37:32.105141] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.257 [2024-11-27 05:37:32.181682] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:44.257 [2024-11-27 
05:37:32.205681] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:44.257 [2024-11-27 05:37:32.210761] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.257 passed 00:15:44.517 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-27 05:37:32.284323] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.517 [2024-11-27 05:37:32.285570] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:44.517 [2024-11-27 05:37:32.285595] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:44.517 [2024-11-27 05:37:32.287344] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.517 passed 00:15:44.517 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-27 05:37:32.363955] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.517 [2024-11-27 05:37:32.456682] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:44.517 [2024-11-27 05:37:32.464675] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:44.517 [2024-11-27 05:37:32.472696] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:44.517 [2024-11-27 05:37:32.480679] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:44.517 [2024-11-27 05:37:32.509759] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.775 passed 00:15:44.775 Test: admin_create_io_sq_verify_pc ...[2024-11-27 05:37:32.583515] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.775 [2024-11-27 05:37:32.598683] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:44.775 [2024-11-27 05:37:32.616686] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:44.775 passed 00:15:44.775 Test: admin_create_io_qp_max_qps ...[2024-11-27 05:37:32.694174] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.156 [2024-11-27 05:37:33.791681] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:46.415 [2024-11-27 05:37:34.170929] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.415 passed 00:15:46.415 Test: admin_create_io_sq_shared_cq ...[2024-11-27 05:37:34.244845] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:46.415 [2024-11-27 05:37:34.380673] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:46.415 [2024-11-27 05:37:34.417727] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:46.675 passed 00:15:46.675 00:15:46.675 Run Summary: Type Total Ran Passed Failed Inactive 00:15:46.675 suites 1 1 n/a 0 0 00:15:46.675 tests 18 18 18 0 0 00:15:46.675 asserts 360 360 360 0 n/a 00:15:46.675 00:15:46.675 Elapsed time = 1.491 seconds 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1733914 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1733914 ']' 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1733914 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1733914 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1733914' 00:15:46.675 killing process with pid 1733914 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1733914 00:15:46.675 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1733914 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:46.935 00:15:46.935 real 0m5.603s 00:15:46.935 user 0m15.679s 00:15:46.935 sys 0m0.501s 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.935 ************************************ 00:15:46.935 END TEST nvmf_vfio_user_nvme_compliance 00:15:46.935 ************************************ 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:46.935 ************************************ 00:15:46.935 START TEST nvmf_vfio_user_fuzz 00:15:46.935 ************************************ 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:46.935 * Looking for test storage... 00:15:46.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.935 05:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:46.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.935 --rc genhtml_branch_coverage=1 00:15:46.935 --rc genhtml_function_coverage=1 00:15:46.935 --rc genhtml_legend=1 00:15:46.935 --rc geninfo_all_blocks=1 00:15:46.935 --rc geninfo_unexecuted_blocks=1 00:15:46.935 00:15:46.935 ' 00:15:46.935 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:46.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.935 --rc genhtml_branch_coverage=1 00:15:46.935 --rc genhtml_function_coverage=1 00:15:46.935 --rc genhtml_legend=1 00:15:46.935 --rc geninfo_all_blocks=1 00:15:46.936 --rc geninfo_unexecuted_blocks=1 00:15:46.936 00:15:46.936 ' 00:15:46.936 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:46.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.936 --rc genhtml_branch_coverage=1 00:15:46.936 --rc genhtml_function_coverage=1 00:15:46.936 --rc genhtml_legend=1 00:15:46.936 --rc geninfo_all_blocks=1 00:15:46.936 --rc geninfo_unexecuted_blocks=1 00:15:46.936 00:15:46.936 ' 00:15:46.936 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:46.936 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:46.936 --rc genhtml_branch_coverage=1 00:15:46.936 --rc genhtml_function_coverage=1 00:15:46.936 --rc genhtml_legend=1 00:15:46.936 --rc geninfo_all_blocks=1 00:15:46.936 --rc geninfo_unexecuted_blocks=1 00:15:46.936 00:15:46.936 ' 00:15:46.936 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.936 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.195 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.196 05:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:47.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1734903 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1734903' 00:15:47.196 Process pid: 1734903 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1734903 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1734903 ']' 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.196 05:37:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.196 05:37:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.455 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.455 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:47.455 05:37:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.393 malloc0 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:48.393 05:37:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:20.498 Fuzzing completed. Shutting down the fuzz application 00:16:20.498 00:16:20.498 Dumping successful admin opcodes: 00:16:20.498 9, 10, 00:16:20.498 Dumping successful io opcodes: 00:16:20.498 0, 00:16:20.498 NS: 0x20000081ef00 I/O qp, Total commands completed: 1076838, total successful commands: 4247, random_seed: 585048000 00:16:20.498 NS: 0x20000081ef00 admin qp, Total commands completed: 222000, total successful commands: 51, random_seed: 3137817152 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1734903 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1734903 ']' 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1734903 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1734903 00:16:20.498 05:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1734903' 00:16:20.498 killing process with pid 1734903 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1734903 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1734903 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:20.498 00:16:20.498 real 0m32.233s 00:16:20.498 user 0m35.002s 00:16:20.498 sys 0m26.297s 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.498 05:38:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.498 ************************************ 00:16:20.498 END TEST nvmf_vfio_user_fuzz 00:16:20.498 ************************************ 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
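[Editor's note] For readability, the vfio-user fuzz target setup that the trace above executes (target/vfio_user_fuzz.sh, steps @32 through @43) can be summarized as the following sketch. This is not runnable standalone: it assumes an SPDK nvmf target is already listening on /var/tmp/spdk.sock, and `rpc.py` stands in for the `rpc_cmd` wrapper seen in the trace; all NQNs, sizes, paths, and fuzzer flags are copied from the log.

```sh
# Create the VFIOUSER transport and a 64 MiB / 512 B-block malloc bdev
rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc.py bdev_malloc_create 64 512 -b malloc0

# Expose the bdev through a subsystem with a vfio-user listener
rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

# Fuzz the listener for 30 s with the fixed seed used in this run
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
    -N -a
```

In this run the fuzzer completed 1,076,838 I/O commands (4,247 successful) and 222,000 admin commands (51 successful) before the subsystem was deleted and the target killed.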
00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.498 ************************************ 00:16:20.498 START TEST nvmf_auth_target 00:16:20.498 ************************************ 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:20.498 * Looking for test storage... 00:16:20.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.498 05:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.498 05:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:20.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.498 --rc genhtml_branch_coverage=1 00:16:20.498 --rc genhtml_function_coverage=1 00:16:20.498 --rc genhtml_legend=1 00:16:20.498 --rc geninfo_all_blocks=1 00:16:20.498 --rc geninfo_unexecuted_blocks=1 00:16:20.498 00:16:20.498 ' 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:20.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.498 --rc genhtml_branch_coverage=1 00:16:20.498 --rc genhtml_function_coverage=1 00:16:20.498 --rc genhtml_legend=1 00:16:20.498 --rc geninfo_all_blocks=1 00:16:20.498 --rc geninfo_unexecuted_blocks=1 00:16:20.498 00:16:20.498 ' 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:20.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.498 --rc genhtml_branch_coverage=1 00:16:20.498 --rc genhtml_function_coverage=1 00:16:20.498 --rc genhtml_legend=1 00:16:20.498 --rc geninfo_all_blocks=1 00:16:20.498 --rc geninfo_unexecuted_blocks=1 00:16:20.498 00:16:20.498 ' 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:20.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.498 --rc genhtml_branch_coverage=1 00:16:20.498 --rc genhtml_function_coverage=1 00:16:20.498 --rc genhtml_legend=1 00:16:20.498 
--rc geninfo_all_blocks=1 00:16:20.498 --rc geninfo_unexecuted_blocks=1 00:16:20.498 00:16:20.498 ' 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.498 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.499 
05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:20.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:20.499 05:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:20.499 05:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:20.499 05:38:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.769 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:25.770 05:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:25.770 05:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:25.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:25.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.770 
05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:25.770 Found net devices under 0000:86:00.0: cvl_0_0 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:25.770 
05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:25.770 Found net devices under 0000:86:00.1: cvl_0_1 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:25.770 05:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.770 05:38:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:25.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:25.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms
00:16:25.770
00:16:25.770 --- 10.0.0.2 ping statistics ---
00:16:25.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:25.770 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:25.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:25.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms
00:16:25.770
00:16:25.770 --- 10.0.0.1 ping statistics ---
00:16:25.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:25.770 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1743315
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1743315
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1743315 ']'
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1743435
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4ebb447e0bdfaf7354879c121599ce5fcc1321511dc31ff7
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.OTa
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4ebb447e0bdfaf7354879c121599ce5fcc1321511dc31ff7 0
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4ebb447e0bdfaf7354879c121599ce5fcc1321511dc31ff7 0
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4ebb447e0bdfaf7354879c121599ce5fcc1321511dc31ff7
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.OTa
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.OTa
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.OTa
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9fd26000f67c80b3e63116746ef81f29230b7f537f2b219b7a9595c315ca13ad
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.H7L
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9fd26000f67c80b3e63116746ef81f29230b7f537f2b219b7a9595c315ca13ad 3
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9fd26000f67c80b3e63116746ef81f29230b7f537f2b219b7a9595c315ca13ad 3
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9fd26000f67c80b3e63116746ef81f29230b7f537f2b219b7a9595c315ca13ad
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.H7L
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.H7L
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.H7L
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7f3f269882d24df3dd44e2132a6073aa
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.IRS
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7f3f269882d24df3dd44e2132a6073aa 1
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7f3f269882d24df3dd44e2132a6073aa 1
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7f3f269882d24df3dd44e2132a6073aa
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:25.770 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.IRS
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.IRS
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.IRS
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=777e2b62d8f31f0f05ea7527548295cbbbae1ac9c835e88f
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.I6l
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 777e2b62d8f31f0f05ea7527548295cbbbae1ac9c835e88f 2
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 777e2b62d8f31f0f05ea7527548295cbbbae1ac9c835e88f 2
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=777e2b62d8f31f0f05ea7527548295cbbbae1ac9c835e88f
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:16:25.771 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.I6l
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.I6l
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.I6l
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=363b042ae439c485b97f6a44b54f4f3502fb1d152432bb42
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qs8
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 363b042ae439c485b97f6a44b54f4f3502fb1d152432bb42 2
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 363b042ae439c485b97f6a44b54f4f3502fb1d152432bb42 2
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=363b042ae439c485b97f6a44b54f4f3502fb1d152432bb42
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:16:26.029 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qs8
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qs8
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.qs8
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=08abe6b0d0d457865ea96d9215afa6e4
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZkH
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 08abe6b0d0d457865ea96d9215afa6e4 1
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 08abe6b0d0d457865ea96d9215afa6e4 1
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=08abe6b0d0d457865ea96d9215afa6e4
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZkH
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZkH
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ZkH
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9c6f80965edbbfbff1db0a441f9fc481233dca6065c84b9306f83352175bd106
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QUy
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9c6f80965edbbfbff1db0a441f9fc481233dca6065c84b9306f83352175bd106 3
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9c6f80965edbbfbff1db0a441f9fc481233dca6065c84b9306f83352175bd106 3
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9c6f80965edbbfbff1db0a441f9fc481233dca6065c84b9306f83352175bd106
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QUy
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QUy
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.QUy
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1743315
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1743315 ']'
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:26.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:26.030 05:38:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.289 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:26.289 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:16:26.289 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1743435 /var/tmp/host.sock
00:16:26.289 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1743435 ']'
00:16:26.289 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:16:26.289 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:26.289 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:16:26.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:16:26.289 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:26.289 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.OTa
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.OTa
00:16:26.548 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.OTa
00:16:26.808 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.H7L ]]
00:16:26.808 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H7L
00:16:26.808 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:26.808 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.808 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:26.808 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H7L
00:16:26.808 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H7L
00:16:27.066 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:16:27.066 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IRS
00:16:27.066 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.066 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.066 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.066 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.IRS
00:16:27.066 05:38:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.IRS
00:16:27.066 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.I6l ]]
00:16:27.066 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.I6l
00:16:27.066 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.067 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.067 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.067 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.I6l
00:16:27.067 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.I6l
00:16:27.325 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:16:27.325 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qs8
00:16:27.325 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.325 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.325 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.325 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.qs8
00:16:27.325 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.qs8
00:16:27.584 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ZkH ]]
00:16:27.584 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZkH
00:16:27.584 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.584 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.584 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.584 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZkH
00:16:27.584 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZkH
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.QUy
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.QUy
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.QUy
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:27.844 05:38:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:28.103 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:28.361
00:16:28.361 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:28.361 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:28.361 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:28.620 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:28.620 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:28.620 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:28.620 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.620 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:28.620 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:28.620 {
00:16:28.620 "cntlid": 1,
00:16:28.620 "qid": 0,
00:16:28.620 "state": "enabled",
00:16:28.620 "thread": "nvmf_tgt_poll_group_000",
00:16:28.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:16:28.620 "listen_address": {
00:16:28.620 "trtype": "TCP",
00:16:28.620 "adrfam": "IPv4",
00:16:28.620 "traddr": "10.0.0.2",
00:16:28.620 "trsvcid": "4420"
00:16:28.620 },
00:16:28.620 "peer_address": {
00:16:28.620 "trtype": "TCP",
00:16:28.620 "adrfam": "IPv4",
00:16:28.620 "traddr": "10.0.0.1",
00:16:28.620 "trsvcid": "57488"
00:16:28.620 },
00:16:28.620 "auth": {
00:16:28.620 "state": "completed",
00:16:28.620 "digest": "sha256",
00:16:28.620 "dhgroup": "null"
00:16:28.620 }
00:16:28.620 }
00:16:28.620 ]'
00:16:28.620 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:28.620 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:28.620 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:28.620 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:28.621 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:28.621 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:28.621 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:28.621 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.879 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:16:28.879 05:38:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:16:29.447 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.447 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.447 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.448 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.448 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.448 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.448 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:29.448 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.706 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:29.706 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.706 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.706 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.706 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.707 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.707 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.707 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.707 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.707 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.707 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.707 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.707 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.965 00:16:29.965 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.965 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.965 05:38:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.223 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.223 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.223 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.223 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.223 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.223 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.223 { 00:16:30.223 "cntlid": 3, 00:16:30.223 "qid": 0, 00:16:30.223 "state": "enabled", 00:16:30.223 "thread": "nvmf_tgt_poll_group_000", 00:16:30.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:30.223 "listen_address": { 00:16:30.223 "trtype": "TCP", 00:16:30.223 "adrfam": "IPv4", 00:16:30.223 
"traddr": "10.0.0.2", 00:16:30.224 "trsvcid": "4420" 00:16:30.224 }, 00:16:30.224 "peer_address": { 00:16:30.224 "trtype": "TCP", 00:16:30.224 "adrfam": "IPv4", 00:16:30.224 "traddr": "10.0.0.1", 00:16:30.224 "trsvcid": "57514" 00:16:30.224 }, 00:16:30.224 "auth": { 00:16:30.224 "state": "completed", 00:16:30.224 "digest": "sha256", 00:16:30.224 "dhgroup": "null" 00:16:30.224 } 00:16:30.224 } 00:16:30.224 ]' 00:16:30.224 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.224 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.224 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.224 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.224 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.224 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.224 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.224 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.482 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:16:30.482 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:16:31.049 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.049 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:31.049 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.049 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.049 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.049 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.049 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.049 05:38:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.308 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.567 00:16:31.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.567 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.567 
05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.827 { 00:16:31.827 "cntlid": 5, 00:16:31.827 "qid": 0, 00:16:31.827 "state": "enabled", 00:16:31.827 "thread": "nvmf_tgt_poll_group_000", 00:16:31.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.827 "listen_address": { 00:16:31.827 "trtype": "TCP", 00:16:31.827 "adrfam": "IPv4", 00:16:31.827 "traddr": "10.0.0.2", 00:16:31.827 "trsvcid": "4420" 00:16:31.827 }, 00:16:31.827 "peer_address": { 00:16:31.827 "trtype": "TCP", 00:16:31.827 "adrfam": "IPv4", 00:16:31.827 "traddr": "10.0.0.1", 00:16:31.827 "trsvcid": "57546" 00:16:31.827 }, 00:16:31.827 "auth": { 00:16:31.827 "state": "completed", 00:16:31.827 "digest": "sha256", 00:16:31.827 "dhgroup": "null" 00:16:31.827 } 00:16:31.827 } 00:16:31.827 ]' 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.827 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.086 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:16:32.086 05:38:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:16:32.685 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.685 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.685 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.685 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.685 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.685 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.685 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.685 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.944 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:32.944 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.944 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.944 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:32.944 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.944 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.945 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:32.945 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.945 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:32.945 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.945 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.945 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.945 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.203 00:16:33.203 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.203 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.203 05:38:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.204 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.204 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.204 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.204 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.204 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.204 
05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.204 { 00:16:33.204 "cntlid": 7, 00:16:33.204 "qid": 0, 00:16:33.204 "state": "enabled", 00:16:33.204 "thread": "nvmf_tgt_poll_group_000", 00:16:33.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:33.204 "listen_address": { 00:16:33.204 "trtype": "TCP", 00:16:33.204 "adrfam": "IPv4", 00:16:33.204 "traddr": "10.0.0.2", 00:16:33.204 "trsvcid": "4420" 00:16:33.204 }, 00:16:33.204 "peer_address": { 00:16:33.204 "trtype": "TCP", 00:16:33.204 "adrfam": "IPv4", 00:16:33.204 "traddr": "10.0.0.1", 00:16:33.204 "trsvcid": "57572" 00:16:33.204 }, 00:16:33.204 "auth": { 00:16:33.204 "state": "completed", 00:16:33.204 "digest": "sha256", 00:16:33.204 "dhgroup": "null" 00:16:33.204 } 00:16:33.204 } 00:16:33.204 ]' 00:16:33.204 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.463 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.463 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.463 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:33.463 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.463 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.463 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.463 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.722 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:16:33.722 05:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.320 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.578 00:16:34.578 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.578 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.578 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.837 { 00:16:34.837 "cntlid": 9, 00:16:34.837 "qid": 0, 00:16:34.837 "state": "enabled", 00:16:34.837 "thread": "nvmf_tgt_poll_group_000", 00:16:34.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:34.837 "listen_address": { 00:16:34.837 "trtype": "TCP", 00:16:34.837 "adrfam": "IPv4", 00:16:34.837 "traddr": "10.0.0.2", 00:16:34.837 "trsvcid": "4420" 00:16:34.837 }, 00:16:34.837 "peer_address": { 00:16:34.837 "trtype": "TCP", 00:16:34.837 "adrfam": "IPv4", 00:16:34.837 "traddr": "10.0.0.1", 00:16:34.837 "trsvcid": "57594" 00:16:34.837 
}, 00:16:34.837 "auth": { 00:16:34.837 "state": "completed", 00:16:34.837 "digest": "sha256", 00:16:34.837 "dhgroup": "ffdhe2048" 00:16:34.837 } 00:16:34.837 } 00:16:34.837 ]' 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.837 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.096 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.096 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.096 05:38:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.096 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:16:35.096 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret 
DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:16:35.663 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.663 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.663 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.663 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.663 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.663 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.663 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.663 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
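The `--dhchap-secret` / `--dhchap-ctrl-secret` strings above use the `DHHC-1:<transform>:<base64 payload>:` representation (this is my reading of the DH-HMAC-CHAP secret format, also what `nvme-cli`'s `gen-dhchap-key` emits, not something the log itself states): transform `00` means the secret is used as-is, and the payload is the key bytes followed by a 4-byte CRC-32 of the key. A standalone sketch that splits one of the secrets from this log into its parts:

```shell
#!/usr/bin/env bash
# One of the --dhchap-secret values from the log above.
# Assumed layout (not verified against SPDK's parser):
#   DHHC-1:<transform>:<base64(key || crc32(key))>:
secret="DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==:"

IFS=: read -r prefix xform b64 _ <<<"$secret"
payload_len=$(printf '%s' "$b64" | base64 -d | wc -c)
key_len=$((payload_len - 4))   # strip the trailing 4-byte CRC-32
echo "prefix=$prefix transform=$xform key_len=$key_len"
```

For this secret the payload decodes to 52 bytes, i.e. a 48-byte key plus the 4-byte CRC, which matches one of the valid DH-CHAP key lengths.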
# key=key1 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.922 05:38:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.182 00:16:36.182 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.182 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.182 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
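The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line traced above relies on bash's `${var:+word}` expansion: the controller-key flag pair is emitted only when a controller key exists for that index, otherwise the array stays empty and the flag is omitted entirely. A minimal standalone sketch of the idiom (the `ckeys` values and flag names here are illustrative, not the test's real keys):

```shell
#!/usr/bin/env bash
# Sketch of the ${var:+word} idiom used by connect_authenticate():
# expand to the extra flag pair only when ckeys[i] is set and non-empty.
ckeys=("secret0" "secret1" "")   # hypothetical: index 2 has no ctrlr key

build_ckey_args() {
    local i=$1
    # Unquoted, this word-splits into two arguments when ckeys[i] is
    # non-empty; when it is empty the whole expansion vanishes.
    echo ${ckeys[$i]:+--dhchap-ctrlr-key ckey$i}
}

build_ckey_args 0   # prints: --dhchap-ctrlr-key ckey0
build_ckey_args 2   # prints an empty line (expansion vanishes)
```

This is why some `bdev_nvme_attach_controller` invocations in the log carry `--dhchap-ctrlr-key ckeyN` and others (key3, which has no paired controller key) do not.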
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.441 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.441 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.441 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.441 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.441 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.441 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.441 { 00:16:36.441 "cntlid": 11, 00:16:36.441 "qid": 0, 00:16:36.441 "state": "enabled", 00:16:36.441 "thread": "nvmf_tgt_poll_group_000", 00:16:36.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:36.441 "listen_address": { 00:16:36.441 "trtype": "TCP", 00:16:36.441 "adrfam": "IPv4", 00:16:36.441 "traddr": "10.0.0.2", 00:16:36.441 "trsvcid": "4420" 00:16:36.441 }, 00:16:36.441 "peer_address": { 00:16:36.441 "trtype": "TCP", 00:16:36.441 "adrfam": "IPv4", 00:16:36.441 "traddr": "10.0.0.1", 00:16:36.441 "trsvcid": "57628" 00:16:36.441 }, 00:16:36.441 "auth": { 00:16:36.441 "state": "completed", 00:16:36.441 "digest": "sha256", 00:16:36.441 "dhgroup": "ffdhe2048" 00:16:36.441 } 00:16:36.441 } 00:16:36.441 ]' 00:16:36.441 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.441 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.441 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.441 05:38:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.441 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.701 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.701 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.701 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.701 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:16:36.701 05:38:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:16:37.268 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.268 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.268 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:37.268 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.268 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.268 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.268 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.268 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.527 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.786 00:16:37.786 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.786 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.786 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.045 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.045 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.045 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.045 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.045 05:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.045 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.045 { 00:16:38.045 "cntlid": 13, 00:16:38.045 "qid": 0, 00:16:38.045 "state": "enabled", 00:16:38.045 "thread": "nvmf_tgt_poll_group_000", 00:16:38.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:38.045 "listen_address": { 00:16:38.045 "trtype": "TCP", 00:16:38.045 "adrfam": "IPv4", 00:16:38.045 "traddr": "10.0.0.2", 00:16:38.045 "trsvcid": "4420" 00:16:38.045 }, 00:16:38.046 "peer_address": { 00:16:38.046 "trtype": "TCP", 00:16:38.046 "adrfam": "IPv4", 00:16:38.046 "traddr": "10.0.0.1", 00:16:38.046 "trsvcid": "48632" 00:16:38.046 }, 00:16:38.046 "auth": { 00:16:38.046 "state": "completed", 00:16:38.046 "digest": "sha256", 00:16:38.046 "dhgroup": "ffdhe2048" 00:16:38.046 } 00:16:38.046 } 00:16:38.046 ]' 00:16:38.046 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.046 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.046 05:38:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.046 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.046 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.304 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.304 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.304 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
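The `jq -r '.[0].auth.digest'` / `'.[0].auth.dhgroup'` / `'.[0].auth.state'` steps above extract the negotiated auth parameters from the `nvmf_subsystem_get_qpairs` output and compare them with `[[ ... ]]`. A standalone sketch of the same three checks against a trimmed qpair record shaped like the log's output — here the values are pulled with `sed` instead of `jq`, purely so the example has no external dependency; the real script uses `jq`:

```shell
#!/usr/bin/env bash
# Trimmed-down auth object shaped like the nvmf_subsystem_get_qpairs output.
qpairs='{"auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe2048"}}'

get_field() {  # crude extractor: prints the value of "name": "value"
    echo "$qpairs" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"
}

digest=$(get_field digest)
dhgroup=$(get_field dhgroup)
state=$(get_field state)
# Same comparisons the test script makes after its jq extractions.
[[ $digest == sha256 && $dhgroup == ffdhe2048 && $state == completed ]] \
    && echo "auth parameters verified"
```

An `auth.state` of `completed` with the expected digest and dhgroup is what tells the test that DH-HMAC-CHAP negotiation actually ran on the queue pair, rather than the connection falling back to no authentication.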
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.304 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:16:38.304 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:16:38.872 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.872 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.872 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.872 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.872 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.872 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.872 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.872 05:38:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.131 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.390 00:16:39.390 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.390 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.390 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.648 { 00:16:39.648 "cntlid": 15, 00:16:39.648 "qid": 0, 00:16:39.648 "state": "enabled", 00:16:39.648 "thread": "nvmf_tgt_poll_group_000", 00:16:39.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:39.648 "listen_address": { 00:16:39.648 "trtype": "TCP", 00:16:39.648 "adrfam": "IPv4", 00:16:39.648 "traddr": "10.0.0.2", 00:16:39.648 "trsvcid": "4420" 00:16:39.648 }, 00:16:39.648 "peer_address": { 00:16:39.648 "trtype": "TCP", 00:16:39.648 "adrfam": "IPv4", 00:16:39.648 "traddr": "10.0.0.1", 
00:16:39.648 "trsvcid": "48654" 00:16:39.648 }, 00:16:39.648 "auth": { 00:16:39.648 "state": "completed", 00:16:39.648 "digest": "sha256", 00:16:39.648 "dhgroup": "ffdhe2048" 00:16:39.648 } 00:16:39.648 } 00:16:39.648 ]' 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.648 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.906 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.906 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.906 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.906 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:16:39.906 05:38:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:16:40.474 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.474 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.474 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.474 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.474 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.474 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.474 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.474 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.474 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.733 05:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.733 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.992 00:16:40.992 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.992 05:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.992 05:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.251 { 00:16:41.251 "cntlid": 17, 00:16:41.251 "qid": 0, 00:16:41.251 "state": "enabled", 00:16:41.251 "thread": "nvmf_tgt_poll_group_000", 00:16:41.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:41.251 "listen_address": { 00:16:41.251 "trtype": "TCP", 00:16:41.251 "adrfam": "IPv4", 00:16:41.251 "traddr": "10.0.0.2", 00:16:41.251 "trsvcid": "4420" 00:16:41.251 }, 00:16:41.251 "peer_address": { 00:16:41.251 "trtype": "TCP", 00:16:41.251 "adrfam": "IPv4", 00:16:41.251 "traddr": "10.0.0.1", 00:16:41.251 "trsvcid": "48692" 00:16:41.251 }, 00:16:41.251 "auth": { 00:16:41.251 "state": "completed", 00:16:41.251 "digest": "sha256", 00:16:41.251 "dhgroup": "ffdhe3072" 00:16:41.251 } 00:16:41.251 } 00:16:41.251 ]' 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.251 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.509 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:16:41.509 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:16:42.073 05:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.073 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:42.073 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.073 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.073 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.073 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.073 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.073 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.331 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.590 00:16:42.590 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.590 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.590 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.850 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.850 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.850 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.850 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.850 
05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.850 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.850 { 00:16:42.850 "cntlid": 19, 00:16:42.850 "qid": 0, 00:16:42.850 "state": "enabled", 00:16:42.850 "thread": "nvmf_tgt_poll_group_000", 00:16:42.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.850 "listen_address": { 00:16:42.850 "trtype": "TCP", 00:16:42.850 "adrfam": "IPv4", 00:16:42.850 "traddr": "10.0.0.2", 00:16:42.850 "trsvcid": "4420" 00:16:42.850 }, 00:16:42.850 "peer_address": { 00:16:42.850 "trtype": "TCP", 00:16:42.850 "adrfam": "IPv4", 00:16:42.850 "traddr": "10.0.0.1", 00:16:42.850 "trsvcid": "48706" 00:16:42.850 }, 00:16:42.850 "auth": { 00:16:42.850 "state": "completed", 00:16:42.850 "digest": "sha256", 00:16:42.850 "dhgroup": "ffdhe3072" 00:16:42.850 } 00:16:42.850 } 00:16:42.850 ]' 00:16:42.850 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.850 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.850 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.850 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.850 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.109 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.109 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.109 05:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.109 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:16:43.109 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:16:43.676 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.677 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.677 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.677 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.677 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.677 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.677 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.677 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.935 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:43.935 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.935 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.935 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.936 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.936 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.936 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.936 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.936 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.936 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.936 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.936 05:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.936 05:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.195 00:16:44.195 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.195 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.195 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.454 { 00:16:44.454 "cntlid": 21, 00:16:44.454 "qid": 0, 00:16:44.454 "state": "enabled", 00:16:44.454 "thread": "nvmf_tgt_poll_group_000", 00:16:44.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:44.454 "listen_address": { 00:16:44.454 "trtype": "TCP", 00:16:44.454 "adrfam": "IPv4", 00:16:44.454 "traddr": "10.0.0.2", 00:16:44.454 "trsvcid": "4420" 00:16:44.454 }, 00:16:44.454 "peer_address": { 
00:16:44.454 "trtype": "TCP", 00:16:44.454 "adrfam": "IPv4", 00:16:44.454 "traddr": "10.0.0.1", 00:16:44.454 "trsvcid": "48738" 00:16:44.454 }, 00:16:44.454 "auth": { 00:16:44.454 "state": "completed", 00:16:44.454 "digest": "sha256", 00:16:44.454 "dhgroup": "ffdhe3072" 00:16:44.454 } 00:16:44.454 } 00:16:44.454 ]' 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.454 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.713 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:16:44.713 05:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:16:45.279 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.279 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:45.279 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.279 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.279 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.279 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.279 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.279 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.538 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:45.538 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.538 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.538 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:45.538 05:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.538 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.538 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:45.539 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.539 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.539 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.539 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.539 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.539 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.798 00:16:45.798 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.798 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.798 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.059 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.059 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.059 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.059 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.059 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.059 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.059 { 00:16:46.059 "cntlid": 23, 00:16:46.059 "qid": 0, 00:16:46.059 "state": "enabled", 00:16:46.059 "thread": "nvmf_tgt_poll_group_000", 00:16:46.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:46.059 "listen_address": { 00:16:46.059 "trtype": "TCP", 00:16:46.059 "adrfam": "IPv4", 00:16:46.059 "traddr": "10.0.0.2", 00:16:46.059 "trsvcid": "4420" 00:16:46.059 }, 00:16:46.059 "peer_address": { 00:16:46.059 "trtype": "TCP", 00:16:46.059 "adrfam": "IPv4", 00:16:46.059 "traddr": "10.0.0.1", 00:16:46.059 "trsvcid": "48764" 00:16:46.059 }, 00:16:46.059 "auth": { 00:16:46.059 "state": "completed", 00:16:46.059 "digest": "sha256", 00:16:46.059 "dhgroup": "ffdhe3072" 00:16:46.059 } 00:16:46.059 } 00:16:46.059 ]' 00:16:46.059 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.059 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.059 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.059 05:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.059 05:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.059 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.059 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.059 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.318 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:16:46.318 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:16:46.885 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.885 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.885 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.885 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:46.885 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.885 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.885 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.885 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.885 05:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.145 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.404 00:16:47.404 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.404 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.404 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.663 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.663 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.663 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.663 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.664 05:38:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.664 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.664 { 00:16:47.664 "cntlid": 25, 00:16:47.664 "qid": 0, 00:16:47.664 "state": "enabled", 00:16:47.664 "thread": "nvmf_tgt_poll_group_000", 00:16:47.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:47.664 "listen_address": { 00:16:47.664 "trtype": "TCP", 00:16:47.664 "adrfam": "IPv4", 00:16:47.664 "traddr": "10.0.0.2", 00:16:47.664 "trsvcid": "4420" 00:16:47.664 }, 00:16:47.664 "peer_address": { 00:16:47.664 "trtype": "TCP", 00:16:47.664 "adrfam": "IPv4", 00:16:47.664 "traddr": "10.0.0.1", 00:16:47.664 "trsvcid": "48774" 00:16:47.664 }, 00:16:47.664 "auth": { 00:16:47.664 "state": "completed", 00:16:47.664 "digest": "sha256", 00:16:47.664 "dhgroup": "ffdhe4096" 00:16:47.664 } 00:16:47.664 } 00:16:47.664 ]' 00:16:47.664 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.664 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.664 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.664 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.664 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.664 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.664 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.664 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.923 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:16:47.923 05:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:16:48.488 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.488 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.488 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.488 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.488 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.488 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.488 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.488 05:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.746 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.005 00:16:49.005 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.005 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.005 05:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.264 { 00:16:49.264 "cntlid": 27, 00:16:49.264 "qid": 0, 00:16:49.264 "state": "enabled", 00:16:49.264 "thread": "nvmf_tgt_poll_group_000", 00:16:49.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:49.264 "listen_address": { 00:16:49.264 "trtype": "TCP", 00:16:49.264 "adrfam": "IPv4", 00:16:49.264 "traddr": "10.0.0.2", 00:16:49.264 
"trsvcid": "4420" 00:16:49.264 }, 00:16:49.264 "peer_address": { 00:16:49.264 "trtype": "TCP", 00:16:49.264 "adrfam": "IPv4", 00:16:49.264 "traddr": "10.0.0.1", 00:16:49.264 "trsvcid": "47074" 00:16:49.264 }, 00:16:49.264 "auth": { 00:16:49.264 "state": "completed", 00:16:49.264 "digest": "sha256", 00:16:49.264 "dhgroup": "ffdhe4096" 00:16:49.264 } 00:16:49.264 } 00:16:49.264 ]' 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.264 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.523 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:16:49.523 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:16:50.090 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.090 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.090 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.090 05:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.090 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.090 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.090 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.090 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.365 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:50.365 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.365 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.365 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.365 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.365 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.366 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.366 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.366 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.366 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.366 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.366 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.366 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.648 00:16:50.648 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.648 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:50.648 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.950 { 00:16:50.950 "cntlid": 29, 00:16:50.950 "qid": 0, 00:16:50.950 "state": "enabled", 00:16:50.950 "thread": "nvmf_tgt_poll_group_000", 00:16:50.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.950 "listen_address": { 00:16:50.950 "trtype": "TCP", 00:16:50.950 "adrfam": "IPv4", 00:16:50.950 "traddr": "10.0.0.2", 00:16:50.950 "trsvcid": "4420" 00:16:50.950 }, 00:16:50.950 "peer_address": { 00:16:50.950 "trtype": "TCP", 00:16:50.950 "adrfam": "IPv4", 00:16:50.950 "traddr": "10.0.0.1", 00:16:50.950 "trsvcid": "47098" 00:16:50.950 }, 00:16:50.950 "auth": { 00:16:50.950 "state": "completed", 00:16:50.950 "digest": "sha256", 00:16:50.950 "dhgroup": "ffdhe4096" 00:16:50.950 } 00:16:50.950 } 00:16:50.950 ]' 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.950 05:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.950 05:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.235 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:16:51.235 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:16:51.799 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.799 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:51.800 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.800 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.800 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.800 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.800 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.800 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.059 05:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.318 00:16:52.318 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.318 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.318 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.577 { 00:16:52.577 "cntlid": 31, 00:16:52.577 "qid": 0, 00:16:52.577 "state": "enabled", 00:16:52.577 "thread": "nvmf_tgt_poll_group_000", 00:16:52.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:52.577 "listen_address": { 00:16:52.577 "trtype": "TCP", 00:16:52.577 "adrfam": "IPv4", 00:16:52.577 "traddr": "10.0.0.2", 00:16:52.577 "trsvcid": "4420" 00:16:52.577 }, 00:16:52.577 "peer_address": { 00:16:52.577 "trtype": "TCP", 00:16:52.577 "adrfam": "IPv4", 00:16:52.577 "traddr": "10.0.0.1", 00:16:52.577 "trsvcid": "47136" 00:16:52.577 }, 00:16:52.577 "auth": { 00:16:52.577 "state": "completed", 00:16:52.577 "digest": "sha256", 00:16:52.577 "dhgroup": "ffdhe4096" 00:16:52.577 } 00:16:52.577 } 00:16:52.577 ]' 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.577 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.836 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:16:52.836 05:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:16:53.401 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.401 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.401 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.401 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.401 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.401 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.401 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.401 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.401 05:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.660 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.919 00:16:53.919 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.919 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.919 05:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.178 { 00:16:54.178 "cntlid": 33, 00:16:54.178 "qid": 0, 00:16:54.178 "state": "enabled", 00:16:54.178 "thread": "nvmf_tgt_poll_group_000", 00:16:54.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:54.178 "listen_address": { 00:16:54.178 "trtype": "TCP", 00:16:54.178 "adrfam": "IPv4", 00:16:54.178 "traddr": "10.0.0.2", 00:16:54.178 
"trsvcid": "4420" 00:16:54.178 }, 00:16:54.178 "peer_address": { 00:16:54.178 "trtype": "TCP", 00:16:54.178 "adrfam": "IPv4", 00:16:54.178 "traddr": "10.0.0.1", 00:16:54.178 "trsvcid": "47164" 00:16:54.178 }, 00:16:54.178 "auth": { 00:16:54.178 "state": "completed", 00:16:54.178 "digest": "sha256", 00:16:54.178 "dhgroup": "ffdhe6144" 00:16:54.178 } 00:16:54.178 } 00:16:54.178 ]' 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.178 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.437 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:16:54.437 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:16:55.003 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.003 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.003 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.003 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.003 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.003 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.003 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.003 05:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.262 05:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.262 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.522 00:16:55.522 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.522 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.522 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.781 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.781 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.781 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.781 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.781 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.781 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.781 { 00:16:55.781 "cntlid": 35, 00:16:55.781 "qid": 0, 00:16:55.781 "state": "enabled", 00:16:55.781 "thread": "nvmf_tgt_poll_group_000", 00:16:55.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:55.781 "listen_address": { 00:16:55.781 "trtype": "TCP", 00:16:55.781 "adrfam": "IPv4", 00:16:55.781 "traddr": "10.0.0.2", 00:16:55.781 "trsvcid": "4420" 00:16:55.781 }, 00:16:55.781 "peer_address": { 00:16:55.781 "trtype": "TCP", 00:16:55.781 "adrfam": "IPv4", 00:16:55.781 "traddr": "10.0.0.1", 00:16:55.781 "trsvcid": "47192" 00:16:55.781 }, 00:16:55.781 "auth": { 00:16:55.781 "state": "completed", 00:16:55.781 "digest": "sha256", 00:16:55.781 "dhgroup": "ffdhe6144" 00:16:55.781 } 00:16:55.781 } 00:16:55.781 ]' 00:16:55.781 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.781 05:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.781 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.781 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.782 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.041 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.041 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.041 05:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.041 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:16:56.041 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:16:56.610 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.610 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.610 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.610 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.610 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.610 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.610 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.610 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
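The `--dhchap-secret` strings passed to `nvme connect` and the SPDK RPCs throughout this run follow the NVMe DH-HMAC-CHAP secret representation, `DHHC-1:<hash-id>:<base64>:`. The sketch below parses one of the secrets from this log; it assumes (per the NVMe secret-representation format, not anything stated in this log) that the base64 payload is the raw key followed by a 4-byte little-endian CRC-32 of the key, and that hash-id `01` denotes a 32-byte SHA-256-sized key. Treat it as illustrative, not as SPDK's implementation.

```python
import base64
import zlib

# Hash-id mapping assumed from the DHHC-1 secret representation:
# 00 = unhashed, 01/02/03 = SHA-256/384/512 key sizes.
HASH_NAMES = {0: "none", 1: "sha256", 2: "sha384", 3: "sha512"}

def parse_dhchap_secret(secret: str):
    """Split a 'DHHC-1:NN:<base64>:' secret into (hash name, key, crc_ok)."""
    prefix, hash_id, b64, _trailer = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64)
    # Last 4 bytes are assumed to be a little-endian CRC-32 of the key.
    key, stored_crc = blob[:-4], blob[-4:]
    crc_ok = stored_crc == zlib.crc32(key).to_bytes(4, "little")
    return HASH_NAMES[int(hash_id)], key, crc_ok

# One of the --dhchap-secret values used by nvme_connect in this run:
name, key, crc_ok = parse_dhchap_secret(
    "DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV:"
)
print(name, len(key), crc_ok)
```

Under these assumptions the secret above carries a 32-byte key with a SHA-256 hash id, which is consistent with the `--dhchap-digests sha256` option the test sets via `bdev_nvme_set_options`.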
00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.869 05:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.127 00:16:57.387 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.387 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.387 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.387 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.387 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.387 05:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.387 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.387 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.387 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.387 { 00:16:57.387 "cntlid": 37, 00:16:57.387 "qid": 0, 00:16:57.387 "state": "enabled", 00:16:57.387 "thread": "nvmf_tgt_poll_group_000", 00:16:57.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.387 "listen_address": { 00:16:57.387 "trtype": "TCP", 00:16:57.387 "adrfam": "IPv4", 00:16:57.387 "traddr": "10.0.0.2", 00:16:57.387 "trsvcid": "4420" 00:16:57.387 }, 00:16:57.387 "peer_address": { 00:16:57.387 "trtype": "TCP", 00:16:57.387 "adrfam": "IPv4", 00:16:57.387 "traddr": "10.0.0.1", 00:16:57.387 "trsvcid": "47218" 00:16:57.387 }, 00:16:57.387 "auth": { 00:16:57.387 "state": "completed", 00:16:57.387 "digest": "sha256", 00:16:57.387 "dhgroup": "ffdhe6144" 00:16:57.387 } 00:16:57.387 } 00:16:57.387 ]' 00:16:57.387 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.646 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.646 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.646 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.646 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.646 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.646 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.646 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.905 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:16:57.905 05:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:16:58.474 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.475 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.734 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.734 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.734 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.734 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:58.993 00:16:58.993 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.993 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.993 05:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.251 { 00:16:59.251 "cntlid": 39, 00:16:59.251 "qid": 0, 00:16:59.251 "state": "enabled", 00:16:59.251 "thread": "nvmf_tgt_poll_group_000", 00:16:59.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:59.251 "listen_address": { 00:16:59.251 "trtype": "TCP", 00:16:59.251 "adrfam": 
"IPv4", 00:16:59.251 "traddr": "10.0.0.2", 00:16:59.251 "trsvcid": "4420" 00:16:59.251 }, 00:16:59.251 "peer_address": { 00:16:59.251 "trtype": "TCP", 00:16:59.251 "adrfam": "IPv4", 00:16:59.251 "traddr": "10.0.0.1", 00:16:59.251 "trsvcid": "55496" 00:16:59.251 }, 00:16:59.251 "auth": { 00:16:59.251 "state": "completed", 00:16:59.251 "digest": "sha256", 00:16:59.251 "dhgroup": "ffdhe6144" 00:16:59.251 } 00:16:59.251 } 00:16:59.251 ]' 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.251 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.252 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.510 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:16:59.510 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:00.077 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.077 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:00.077 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.077 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.077 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.077 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.077 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.077 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.077 05:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:00.335 
05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.335 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.903 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.903 05:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.903 { 00:17:00.903 "cntlid": 41, 00:17:00.903 "qid": 0, 00:17:00.903 "state": "enabled", 00:17:00.903 "thread": "nvmf_tgt_poll_group_000", 00:17:00.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:00.903 "listen_address": { 00:17:00.903 "trtype": "TCP", 00:17:00.903 "adrfam": "IPv4", 00:17:00.903 "traddr": "10.0.0.2", 00:17:00.903 "trsvcid": "4420" 00:17:00.903 }, 00:17:00.903 "peer_address": { 00:17:00.903 "trtype": "TCP", 00:17:00.903 "adrfam": "IPv4", 00:17:00.903 "traddr": "10.0.0.1", 00:17:00.903 "trsvcid": "55532" 00:17:00.903 }, 00:17:00.903 "auth": { 00:17:00.903 "state": "completed", 00:17:00.903 "digest": "sha256", 00:17:00.903 "dhgroup": "ffdhe8192" 00:17:00.903 } 00:17:00.903 } 00:17:00.903 ]' 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:00.903 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.163 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.163 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.163 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.163 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.163 05:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.420 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:01.420 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:01.987 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.988 05:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.555 00:17:02.555 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.555 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.555 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.813 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.813 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.813 05:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.813 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.813 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.813 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.813 { 00:17:02.813 "cntlid": 43, 00:17:02.813 "qid": 0, 00:17:02.813 "state": "enabled", 00:17:02.813 "thread": "nvmf_tgt_poll_group_000", 00:17:02.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:02.813 "listen_address": { 00:17:02.813 "trtype": "TCP", 00:17:02.813 "adrfam": "IPv4", 00:17:02.813 "traddr": "10.0.0.2", 00:17:02.813 "trsvcid": "4420" 00:17:02.813 }, 00:17:02.813 "peer_address": { 00:17:02.813 "trtype": "TCP", 00:17:02.813 "adrfam": "IPv4", 00:17:02.813 "traddr": "10.0.0.1", 00:17:02.813 "trsvcid": "55566" 00:17:02.813 }, 00:17:02.813 "auth": { 00:17:02.813 "state": "completed", 00:17:02.813 "digest": "sha256", 00:17:02.813 "dhgroup": "ffdhe8192" 00:17:02.813 } 00:17:02.813 } 00:17:02.813 ]' 00:17:02.813 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.814 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.814 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.814 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.814 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.814 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.814 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.814 05:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.074 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:03.074 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:03.642 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.642 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.642 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.642 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.642 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.642 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.642 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.642 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.901 05:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.470 00:17:04.470 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.470 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.470 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.729 { 00:17:04.729 "cntlid": 45, 00:17:04.729 "qid": 0, 00:17:04.729 "state": "enabled", 00:17:04.729 "thread": "nvmf_tgt_poll_group_000", 00:17:04.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.729 
"listen_address": { 00:17:04.729 "trtype": "TCP", 00:17:04.729 "adrfam": "IPv4", 00:17:04.729 "traddr": "10.0.0.2", 00:17:04.729 "trsvcid": "4420" 00:17:04.729 }, 00:17:04.729 "peer_address": { 00:17:04.729 "trtype": "TCP", 00:17:04.729 "adrfam": "IPv4", 00:17:04.729 "traddr": "10.0.0.1", 00:17:04.729 "trsvcid": "55586" 00:17:04.729 }, 00:17:04.729 "auth": { 00:17:04.729 "state": "completed", 00:17:04.729 "digest": "sha256", 00:17:04.729 "dhgroup": "ffdhe8192" 00:17:04.729 } 00:17:04.729 } 00:17:04.729 ]' 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.729 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.987 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:04.988 05:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:05.555 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.555 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.555 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.555 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.555 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.555 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.555 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.555 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.815 05:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.387 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.387 { 00:17:06.387 "cntlid": 47, 00:17:06.387 "qid": 0, 00:17:06.387 "state": "enabled", 00:17:06.387 "thread": "nvmf_tgt_poll_group_000", 00:17:06.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:06.387 "listen_address": { 00:17:06.387 "trtype": "TCP", 00:17:06.387 "adrfam": "IPv4", 00:17:06.387 "traddr": "10.0.0.2", 00:17:06.387 "trsvcid": "4420" 00:17:06.387 }, 00:17:06.387 "peer_address": { 00:17:06.387 "trtype": "TCP", 00:17:06.387 "adrfam": "IPv4", 00:17:06.387 "traddr": "10.0.0.1", 00:17:06.387 "trsvcid": "55610" 00:17:06.387 }, 00:17:06.387 "auth": { 00:17:06.387 "state": "completed", 00:17:06.387 "digest": "sha256", 00:17:06.387 "dhgroup": "ffdhe8192" 00:17:06.387 } 00:17:06.387 } 00:17:06.387 ]' 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.387 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.387 05:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.647 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.647 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.647 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.647 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.647 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.647 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:06.647 05:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:07.213 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.213 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.214 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:07.214 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.214 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.214 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:07.214 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.214 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.214 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:07.214 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:07.472 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:07.472 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.472 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.472 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.472 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.472 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.472 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.472 
05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.472 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.472 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.472 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.473 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.473 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.731 00:17:07.731 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.731 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.731 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.989 { 00:17:07.989 "cntlid": 49, 00:17:07.989 "qid": 0, 00:17:07.989 "state": "enabled", 00:17:07.989 "thread": "nvmf_tgt_poll_group_000", 00:17:07.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:07.989 "listen_address": { 00:17:07.989 "trtype": "TCP", 00:17:07.989 "adrfam": "IPv4", 00:17:07.989 "traddr": "10.0.0.2", 00:17:07.989 "trsvcid": "4420" 00:17:07.989 }, 00:17:07.989 "peer_address": { 00:17:07.989 "trtype": "TCP", 00:17:07.989 "adrfam": "IPv4", 00:17:07.989 "traddr": "10.0.0.1", 00:17:07.989 "trsvcid": "50326" 00:17:07.989 }, 00:17:07.989 "auth": { 00:17:07.989 "state": "completed", 00:17:07.989 "digest": "sha384", 00:17:07.989 "dhgroup": "null" 00:17:07.989 } 00:17:07.989 } 00:17:07.989 ]' 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.989 05:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.247 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.247 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:17:08.247 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.247 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:08.248 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:08.833 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.833 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.833 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.833 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.833 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.833 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.833 05:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.833 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.091 05:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.350 00:17:09.350 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.350 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.350 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.610 { 00:17:09.610 "cntlid": 51, 00:17:09.610 "qid": 0, 00:17:09.610 "state": "enabled", 00:17:09.610 "thread": "nvmf_tgt_poll_group_000", 00:17:09.610 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:09.610 "listen_address": { 00:17:09.610 "trtype": "TCP", 00:17:09.610 "adrfam": "IPv4", 00:17:09.610 "traddr": "10.0.0.2", 00:17:09.610 "trsvcid": "4420" 00:17:09.610 }, 00:17:09.610 "peer_address": { 00:17:09.610 "trtype": "TCP", 00:17:09.610 "adrfam": "IPv4", 00:17:09.610 "traddr": "10.0.0.1", 00:17:09.610 "trsvcid": "50360" 00:17:09.610 }, 00:17:09.610 "auth": { 00:17:09.610 "state": "completed", 00:17:09.610 "digest": "sha384", 00:17:09.610 "dhgroup": "null" 00:17:09.610 } 00:17:09.610 } 00:17:09.610 ]' 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.610 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.869 05:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:09.869 05:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:10.437 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.437 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.437 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.437 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.437 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.437 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.437 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.437 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.696 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.954 00:17:10.954 05:38:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.954 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.954 05:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.212 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.212 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.212 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.212 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.212 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.212 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.212 { 00:17:11.212 "cntlid": 53, 00:17:11.212 "qid": 0, 00:17:11.212 "state": "enabled", 00:17:11.212 "thread": "nvmf_tgt_poll_group_000", 00:17:11.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:11.212 "listen_address": { 00:17:11.213 "trtype": "TCP", 00:17:11.213 "adrfam": "IPv4", 00:17:11.213 "traddr": "10.0.0.2", 00:17:11.213 "trsvcid": "4420" 00:17:11.213 }, 00:17:11.213 "peer_address": { 00:17:11.213 "trtype": "TCP", 00:17:11.213 "adrfam": "IPv4", 00:17:11.213 "traddr": "10.0.0.1", 00:17:11.213 "trsvcid": "50392" 00:17:11.213 }, 00:17:11.213 "auth": { 00:17:11.213 "state": "completed", 00:17:11.213 "digest": "sha384", 00:17:11.213 "dhgroup": "null" 00:17:11.213 } 00:17:11.213 } 00:17:11.213 ]' 00:17:11.213 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:17:11.213 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.213 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.213 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.213 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.213 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.213 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.213 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.470 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:11.470 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:12.038 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.038 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:12.038 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.038 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.038 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.038 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.038 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.038 05:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:12.297 
05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.297 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.557 00:17:12.557 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.557 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.557 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.816 05:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.816 { 00:17:12.816 "cntlid": 55, 00:17:12.816 "qid": 0, 00:17:12.816 "state": "enabled", 00:17:12.816 "thread": "nvmf_tgt_poll_group_000", 00:17:12.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.816 "listen_address": { 00:17:12.816 "trtype": "TCP", 00:17:12.816 "adrfam": "IPv4", 00:17:12.816 "traddr": "10.0.0.2", 00:17:12.816 "trsvcid": "4420" 00:17:12.816 }, 00:17:12.816 "peer_address": { 00:17:12.816 "trtype": "TCP", 00:17:12.816 "adrfam": "IPv4", 00:17:12.816 "traddr": "10.0.0.1", 00:17:12.816 "trsvcid": "50416" 00:17:12.816 }, 00:17:12.816 "auth": { 00:17:12.816 "state": "completed", 00:17:12.816 "digest": "sha384", 00:17:12.816 "dhgroup": "null" 00:17:12.816 } 00:17:12.816 } 00:17:12.816 ]' 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.816 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.075 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:13.075 05:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:13.645 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.645 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.645 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.645 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.645 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.645 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.645 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.645 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.645 05:39:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.903 05:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.161 00:17:14.161 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.161 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.161 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.420 { 00:17:14.420 "cntlid": 57, 00:17:14.420 "qid": 0, 00:17:14.420 "state": "enabled", 00:17:14.420 "thread": "nvmf_tgt_poll_group_000", 00:17:14.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:14.420 "listen_address": { 00:17:14.420 "trtype": "TCP", 00:17:14.420 "adrfam": "IPv4", 00:17:14.420 "traddr": "10.0.0.2", 00:17:14.420 
"trsvcid": "4420" 00:17:14.420 }, 00:17:14.420 "peer_address": { 00:17:14.420 "trtype": "TCP", 00:17:14.420 "adrfam": "IPv4", 00:17:14.420 "traddr": "10.0.0.1", 00:17:14.420 "trsvcid": "50452" 00:17:14.420 }, 00:17:14.420 "auth": { 00:17:14.420 "state": "completed", 00:17:14.420 "digest": "sha384", 00:17:14.420 "dhgroup": "ffdhe2048" 00:17:14.420 } 00:17:14.420 } 00:17:14.420 ]' 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.420 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.679 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:14.679 05:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:15.247 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.247 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:15.247 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.247 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.247 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.247 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.247 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.247 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.506 05:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.506 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.766 00:17:15.766 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.766 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.766 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.051 { 00:17:16.051 "cntlid": 59, 00:17:16.051 "qid": 0, 00:17:16.051 "state": "enabled", 00:17:16.051 "thread": "nvmf_tgt_poll_group_000", 00:17:16.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:16.051 "listen_address": { 00:17:16.051 "trtype": "TCP", 00:17:16.051 "adrfam": "IPv4", 00:17:16.051 "traddr": "10.0.0.2", 00:17:16.051 "trsvcid": "4420" 00:17:16.051 }, 00:17:16.051 "peer_address": { 00:17:16.051 "trtype": "TCP", 00:17:16.051 "adrfam": "IPv4", 00:17:16.051 "traddr": "10.0.0.1", 00:17:16.051 "trsvcid": "50480" 00:17:16.051 }, 00:17:16.051 "auth": { 00:17:16.051 "state": "completed", 00:17:16.051 "digest": "sha384", 00:17:16.051 "dhgroup": "ffdhe2048" 00:17:16.051 } 00:17:16.051 } 00:17:16.051 ]' 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.051 05:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.051 05:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.310 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:16.310 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:16.878 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.878 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.878 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.878 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.878 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.878 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.878 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.878 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
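The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion traced above is what makes bidirectional authentication optional per keyid: when no controller key is defined for a key index, the whole `--dhchap-ctrlr-key` argument pair is dropped from the `nvmf_subsystem_add_host` and `bdev_nvme_attach_controller` calls (as happens for key3 later in this trace). A minimal Python sketch of that argument-building behaviour; the `keyN`/`ckeyN` names mirror the test's own placeholders, and the mapping below is an illustrative assumption, not read from the test script:

```python
# Illustrative stand-in for the test's ckeys array: keys 0-2 have a
# controller key, key 3 deliberately has none (unidirectional auth).
ckeys = {0: "ckey0", 1: "ckey1", 2: "ckey2", 3: ""}

def build_dhchap_args(keyid: int) -> list:
    """Mimic the bash ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion:
    an unset or empty controller-key entry drops the argument pair."""
    args = ["--dhchap-key", f"key{keyid}"]
    if ckeys.get(keyid):  # falsy for "" or missing -> unidirectional
        args += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    return args

# Bidirectional: both key and controller key are passed.
print(build_dhchap_args(2))  # ['--dhchap-key', 'key2', '--dhchap-ctrlr-key', 'ckey2']
# Unidirectional: only the host key survives the expansion.
print(build_dhchap_args(3))  # ['--dhchap-key', 'key3']
```

This matches the trace, where the key0-key2 cycles carry `--dhchap-ctrlr-key ckeyN` and the key3 cycle does not.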
00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.137 05:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.397 00:17:17.397 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.397 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.397 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.397 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.397 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.397 05:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.397 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.397 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.656 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.656 { 00:17:17.656 "cntlid": 61, 00:17:17.656 "qid": 0, 00:17:17.656 "state": "enabled", 00:17:17.656 "thread": "nvmf_tgt_poll_group_000", 00:17:17.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:17.656 "listen_address": { 00:17:17.656 "trtype": "TCP", 00:17:17.656 "adrfam": "IPv4", 00:17:17.656 "traddr": "10.0.0.2", 00:17:17.656 "trsvcid": "4420" 00:17:17.656 }, 00:17:17.656 "peer_address": { 00:17:17.656 "trtype": "TCP", 00:17:17.656 "adrfam": "IPv4", 00:17:17.656 "traddr": "10.0.0.1", 00:17:17.656 "trsvcid": "50514" 00:17:17.656 }, 00:17:17.656 "auth": { 00:17:17.656 "state": "completed", 00:17:17.656 "digest": "sha384", 00:17:17.656 "dhgroup": "ffdhe2048" 00:17:17.656 } 00:17:17.656 } 00:17:17.656 ]' 00:17:17.656 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.656 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.656 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.656 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:17.656 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.656 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.656 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.656 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.961 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:17.961 05:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:18.530 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.530 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:18.530 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.530 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.530 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.530 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.530 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.530 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.790 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.790 00:17:19.050 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.050 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.050 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.050 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.050 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.050 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.050 05:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.050 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.050 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.050 { 00:17:19.050 "cntlid": 63, 00:17:19.050 "qid": 0, 00:17:19.050 "state": "enabled", 00:17:19.050 "thread": "nvmf_tgt_poll_group_000", 00:17:19.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:19.050 "listen_address": { 00:17:19.050 "trtype": "TCP", 00:17:19.050 "adrfam": 
"IPv4", 00:17:19.050 "traddr": "10.0.0.2", 00:17:19.050 "trsvcid": "4420" 00:17:19.050 }, 00:17:19.050 "peer_address": { 00:17:19.050 "trtype": "TCP", 00:17:19.050 "adrfam": "IPv4", 00:17:19.050 "traddr": "10.0.0.1", 00:17:19.050 "trsvcid": "60542" 00:17:19.050 }, 00:17:19.050 "auth": { 00:17:19.050 "state": "completed", 00:17:19.050 "digest": "sha384", 00:17:19.050 "dhgroup": "ffdhe2048" 00:17:19.050 } 00:17:19.050 } 00:17:19.050 ]' 00:17:19.050 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.050 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.050 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.310 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.310 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.310 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.310 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.310 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.569 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:19.569 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:20.136 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.136 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.136 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.136 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.136 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.136 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.136 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.136 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.136 05:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.136 
05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.136 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.395 00:17:20.395 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.395 05:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.395 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.654 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.654 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.654 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.654 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.654 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.654 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.654 { 00:17:20.654 "cntlid": 65, 00:17:20.654 "qid": 0, 00:17:20.654 "state": "enabled", 00:17:20.654 "thread": "nvmf_tgt_poll_group_000", 00:17:20.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:20.654 "listen_address": { 00:17:20.654 "trtype": "TCP", 00:17:20.654 "adrfam": "IPv4", 00:17:20.654 "traddr": "10.0.0.2", 00:17:20.654 "trsvcid": "4420" 00:17:20.654 }, 00:17:20.654 "peer_address": { 00:17:20.654 "trtype": "TCP", 00:17:20.654 "adrfam": "IPv4", 00:17:20.654 "traddr": "10.0.0.1", 00:17:20.654 "trsvcid": "60572" 00:17:20.654 }, 00:17:20.654 "auth": { 00:17:20.654 "state": "completed", 00:17:20.654 "digest": "sha384", 00:17:20.654 "dhgroup": "ffdhe3072" 00:17:20.654 } 00:17:20.654 } 00:17:20.654 ]' 00:17:20.654 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.654 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:20.654 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.913 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.913 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.913 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.913 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.913 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.172 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:21.172 05:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.739 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.999 00:17:21.999 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.999 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.999 05:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.259 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.259 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.259 05:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.259 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.259 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.259 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.259 { 00:17:22.259 "cntlid": 67, 00:17:22.259 "qid": 0, 00:17:22.259 "state": "enabled", 00:17:22.259 "thread": "nvmf_tgt_poll_group_000", 00:17:22.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:22.259 "listen_address": { 00:17:22.259 "trtype": "TCP", 00:17:22.259 "adrfam": "IPv4", 00:17:22.259 "traddr": "10.0.0.2", 00:17:22.259 "trsvcid": "4420" 00:17:22.259 }, 00:17:22.259 "peer_address": { 00:17:22.259 "trtype": "TCP", 00:17:22.259 "adrfam": "IPv4", 00:17:22.259 "traddr": "10.0.0.1", 00:17:22.259 "trsvcid": "60596" 00:17:22.259 }, 00:17:22.259 "auth": { 00:17:22.259 "state": "completed", 00:17:22.259 "digest": "sha384", 00:17:22.259 "dhgroup": "ffdhe3072" 00:17:22.259 } 00:17:22.259 } 00:17:22.259 ]' 00:17:22.259 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.259 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.259 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.519 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.519 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.519 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.519 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.519 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.519 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:22.519 05:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:23.088 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.088 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:23.088 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.346 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.347 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.606 00:17:23.606 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.606 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.606 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.865 { 00:17:23.865 "cntlid": 69, 00:17:23.865 "qid": 0, 00:17:23.865 "state": "enabled", 00:17:23.865 "thread": "nvmf_tgt_poll_group_000", 00:17:23.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:23.865 
"listen_address": { 00:17:23.865 "trtype": "TCP", 00:17:23.865 "adrfam": "IPv4", 00:17:23.865 "traddr": "10.0.0.2", 00:17:23.865 "trsvcid": "4420" 00:17:23.865 }, 00:17:23.865 "peer_address": { 00:17:23.865 "trtype": "TCP", 00:17:23.865 "adrfam": "IPv4", 00:17:23.865 "traddr": "10.0.0.1", 00:17:23.865 "trsvcid": "60630" 00:17:23.865 }, 00:17:23.865 "auth": { 00:17:23.865 "state": "completed", 00:17:23.865 "digest": "sha384", 00:17:23.865 "dhgroup": "ffdhe3072" 00:17:23.865 } 00:17:23.865 } 00:17:23.865 ]' 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.865 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.124 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.124 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.124 05:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.124 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:24.124 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:24.693 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.693 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:24.693 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.693 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.953 05:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.214 00:17:25.214 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.214 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:25.214 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.492 { 00:17:25.492 "cntlid": 71, 00:17:25.492 "qid": 0, 00:17:25.492 "state": "enabled", 00:17:25.492 "thread": "nvmf_tgt_poll_group_000", 00:17:25.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:25.492 "listen_address": { 00:17:25.492 "trtype": "TCP", 00:17:25.492 "adrfam": "IPv4", 00:17:25.492 "traddr": "10.0.0.2", 00:17:25.492 "trsvcid": "4420" 00:17:25.492 }, 00:17:25.492 "peer_address": { 00:17:25.492 "trtype": "TCP", 00:17:25.492 "adrfam": "IPv4", 00:17:25.492 "traddr": "10.0.0.1", 00:17:25.492 "trsvcid": "60670" 00:17:25.492 }, 00:17:25.492 "auth": { 00:17:25.492 "state": "completed", 00:17:25.492 "digest": "sha384", 00:17:25.492 "dhgroup": "ffdhe3072" 00:17:25.492 } 00:17:25.492 } 00:17:25.492 ]' 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.492 05:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.492 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.752 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:25.753 05:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:26.322 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.322 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:26.322 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:26.322 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.322 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.322 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.322 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.322 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.322 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.581 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.840 00:17:26.840 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.840 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.840 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.099 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.099 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.099 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.099 05:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.099 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.099 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.099 { 00:17:27.099 "cntlid": 73, 00:17:27.099 "qid": 0, 00:17:27.099 "state": "enabled", 00:17:27.099 "thread": "nvmf_tgt_poll_group_000", 00:17:27.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:27.099 "listen_address": { 00:17:27.099 "trtype": "TCP", 00:17:27.099 "adrfam": "IPv4", 00:17:27.099 "traddr": "10.0.0.2", 00:17:27.099 "trsvcid": "4420" 00:17:27.099 }, 00:17:27.099 "peer_address": { 00:17:27.099 "trtype": "TCP", 00:17:27.099 "adrfam": "IPv4", 00:17:27.099 "traddr": "10.0.0.1", 00:17:27.099 "trsvcid": "60694" 00:17:27.099 }, 00:17:27.099 "auth": { 00:17:27.099 "state": "completed", 00:17:27.099 "digest": "sha384", 00:17:27.099 "dhgroup": "ffdhe4096" 00:17:27.099 } 00:17:27.099 } 00:17:27.099 ]' 00:17:27.099 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.099 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.099 05:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.099 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.099 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.099 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.099 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.099 05:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.359 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:27.359 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:27.928 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.928 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.928 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.928 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.928 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.928 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.928 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.928 05:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.187 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.447 00:17:28.447 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.447 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.447 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.706 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.706 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.707 { 00:17:28.707 "cntlid": 75, 00:17:28.707 "qid": 0, 00:17:28.707 "state": "enabled", 00:17:28.707 "thread": "nvmf_tgt_poll_group_000", 00:17:28.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:28.707 
"listen_address": { 00:17:28.707 "trtype": "TCP", 00:17:28.707 "adrfam": "IPv4", 00:17:28.707 "traddr": "10.0.0.2", 00:17:28.707 "trsvcid": "4420" 00:17:28.707 }, 00:17:28.707 "peer_address": { 00:17:28.707 "trtype": "TCP", 00:17:28.707 "adrfam": "IPv4", 00:17:28.707 "traddr": "10.0.0.1", 00:17:28.707 "trsvcid": "39536" 00:17:28.707 }, 00:17:28.707 "auth": { 00:17:28.707 "state": "completed", 00:17:28.707 "digest": "sha384", 00:17:28.707 "dhgroup": "ffdhe4096" 00:17:28.707 } 00:17:28.707 } 00:17:28.707 ]' 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.707 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.966 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:28.966 05:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:29.534 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.534 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.534 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.534 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.534 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.534 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.534 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:29.534 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.793 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.052 00:17:30.052 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:30.052 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.052 05:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.311 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.311 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.311 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.311 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.311 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.311 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.311 { 00:17:30.311 "cntlid": 77, 00:17:30.311 "qid": 0, 00:17:30.311 "state": "enabled", 00:17:30.311 "thread": "nvmf_tgt_poll_group_000", 00:17:30.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:30.311 "listen_address": { 00:17:30.311 "trtype": "TCP", 00:17:30.311 "adrfam": "IPv4", 00:17:30.311 "traddr": "10.0.0.2", 00:17:30.311 "trsvcid": "4420" 00:17:30.311 }, 00:17:30.312 "peer_address": { 00:17:30.312 "trtype": "TCP", 00:17:30.312 "adrfam": "IPv4", 00:17:30.312 "traddr": "10.0.0.1", 00:17:30.312 "trsvcid": "39572" 00:17:30.312 }, 00:17:30.312 "auth": { 00:17:30.312 "state": "completed", 00:17:30.312 "digest": "sha384", 00:17:30.312 "dhgroup": "ffdhe4096" 00:17:30.312 } 00:17:30.312 } 00:17:30.312 ]' 00:17:30.312 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.312 05:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.312 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.312 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.312 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.312 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.312 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.312 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.571 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:30.571 05:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:31.139 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.139 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:31.139 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.139 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.139 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.139 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.139 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.139 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:31.398 05:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.398 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.655 00:17:31.655 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.655 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.655 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.913 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.913 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.913 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.913 05:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.913 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.913 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.913 { 00:17:31.913 "cntlid": 79, 00:17:31.913 "qid": 0, 00:17:31.913 "state": "enabled", 00:17:31.913 "thread": "nvmf_tgt_poll_group_000", 00:17:31.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:31.913 "listen_address": { 00:17:31.913 "trtype": "TCP", 00:17:31.913 "adrfam": "IPv4", 00:17:31.913 "traddr": "10.0.0.2", 00:17:31.913 "trsvcid": "4420" 00:17:31.913 }, 00:17:31.913 "peer_address": { 00:17:31.913 "trtype": "TCP", 00:17:31.913 "adrfam": "IPv4", 00:17:31.913 "traddr": "10.0.0.1", 00:17:31.913 "trsvcid": "39602" 00:17:31.913 }, 00:17:31.913 "auth": { 00:17:31.913 "state": "completed", 00:17:31.913 "digest": "sha384", 00:17:31.913 "dhgroup": "ffdhe4096" 00:17:31.913 } 00:17:31.913 } 00:17:31.913 ]' 00:17:31.913 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.913 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.913 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.913 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.913 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.172 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.172 05:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.172 05:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.172 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:32.172 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:32.739 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.739 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:32.739 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.739 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.739 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.739 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.739 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.739 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:32.739 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.998 05:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.257 00:17:33.516 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.516 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.516 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.516 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.516 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.516 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.516 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.516 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.516 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.516 { 00:17:33.516 "cntlid": 81, 00:17:33.516 "qid": 0, 00:17:33.516 "state": "enabled", 00:17:33.516 "thread": "nvmf_tgt_poll_group_000", 00:17:33.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:33.516 "listen_address": { 
00:17:33.516 "trtype": "TCP", 00:17:33.516 "adrfam": "IPv4", 00:17:33.516 "traddr": "10.0.0.2", 00:17:33.516 "trsvcid": "4420" 00:17:33.516 }, 00:17:33.516 "peer_address": { 00:17:33.517 "trtype": "TCP", 00:17:33.517 "adrfam": "IPv4", 00:17:33.517 "traddr": "10.0.0.1", 00:17:33.517 "trsvcid": "39630" 00:17:33.517 }, 00:17:33.517 "auth": { 00:17:33.517 "state": "completed", 00:17:33.517 "digest": "sha384", 00:17:33.517 "dhgroup": "ffdhe6144" 00:17:33.517 } 00:17:33.517 } 00:17:33.517 ]' 00:17:33.517 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.517 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.517 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.776 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.776 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.776 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.776 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.776 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.035 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:34.035 05:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.602 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.603 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.603 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.171 00:17:35.171 05:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.171 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.171 05:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.171 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.171 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.171 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.171 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.171 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.171 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.171 { 00:17:35.171 "cntlid": 83, 00:17:35.171 "qid": 0, 00:17:35.171 "state": "enabled", 00:17:35.171 "thread": "nvmf_tgt_poll_group_000", 00:17:35.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:35.171 "listen_address": { 00:17:35.171 "trtype": "TCP", 00:17:35.171 "adrfam": "IPv4", 00:17:35.171 "traddr": "10.0.0.2", 00:17:35.171 "trsvcid": "4420" 00:17:35.171 }, 00:17:35.171 "peer_address": { 00:17:35.171 "trtype": "TCP", 00:17:35.171 "adrfam": "IPv4", 00:17:35.171 "traddr": "10.0.0.1", 00:17:35.171 "trsvcid": "39666" 00:17:35.171 }, 00:17:35.171 "auth": { 00:17:35.171 "state": "completed", 00:17:35.171 "digest": "sha384", 00:17:35.171 "dhgroup": "ffdhe6144" 00:17:35.171 } 00:17:35.171 } 00:17:35.171 ]' 00:17:35.171 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:35.171 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.171 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.429 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.429 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.429 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.429 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.429 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.687 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:35.687 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:36.255 05:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.255 05:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.255 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.823 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.823 { 00:17:36.823 "cntlid": 85, 00:17:36.823 "qid": 0, 00:17:36.823 "state": "enabled", 00:17:36.823 "thread": "nvmf_tgt_poll_group_000", 00:17:36.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:36.823 "listen_address": { 00:17:36.823 "trtype": "TCP", 00:17:36.823 "adrfam": "IPv4", 00:17:36.823 "traddr": "10.0.0.2", 00:17:36.823 "trsvcid": "4420" 00:17:36.823 }, 00:17:36.823 "peer_address": { 00:17:36.823 "trtype": "TCP", 00:17:36.823 "adrfam": "IPv4", 00:17:36.823 "traddr": "10.0.0.1", 00:17:36.823 "trsvcid": "39692" 00:17:36.823 }, 00:17:36.823 "auth": { 00:17:36.823 "state": "completed", 00:17:36.823 "digest": "sha384", 00:17:36.823 "dhgroup": "ffdhe6144" 00:17:36.823 } 00:17:36.823 } 00:17:36.823 ]' 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.823 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.083 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.083 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.083 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:37.083 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.083 05:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.341 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:37.341 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:37.907 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.908 05:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:38.477 00:17:38.477 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.477 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.477 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.477 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.477 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.477 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.477 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.477 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.478 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.478 { 00:17:38.478 "cntlid": 87, 00:17:38.478 "qid": 0, 00:17:38.478 "state": "enabled", 00:17:38.478 "thread": "nvmf_tgt_poll_group_000", 00:17:38.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:38.478 "listen_address": { 00:17:38.478 "trtype": 
"TCP", 00:17:38.478 "adrfam": "IPv4", 00:17:38.478 "traddr": "10.0.0.2", 00:17:38.478 "trsvcid": "4420" 00:17:38.478 }, 00:17:38.478 "peer_address": { 00:17:38.478 "trtype": "TCP", 00:17:38.478 "adrfam": "IPv4", 00:17:38.478 "traddr": "10.0.0.1", 00:17:38.478 "trsvcid": "55482" 00:17:38.478 }, 00:17:38.478 "auth": { 00:17:38.478 "state": "completed", 00:17:38.478 "digest": "sha384", 00:17:38.478 "dhgroup": "ffdhe6144" 00:17:38.478 } 00:17:38.478 } 00:17:38.478 ]' 00:17:38.478 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.743 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.743 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.743 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.743 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.743 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.743 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.743 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.003 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:39.003 05:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.572 05:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.572 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.831 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.831 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.831 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.831 05:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.089 00:17:40.089 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.089 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.089 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.348 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.348 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.348 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.348 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.348 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.348 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.348 { 00:17:40.348 "cntlid": 89, 00:17:40.348 "qid": 0, 00:17:40.348 "state": "enabled", 00:17:40.348 "thread": "nvmf_tgt_poll_group_000", 00:17:40.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:40.348 "listen_address": { 00:17:40.348 "trtype": "TCP", 00:17:40.348 "adrfam": "IPv4", 00:17:40.348 "traddr": "10.0.0.2", 00:17:40.348 "trsvcid": "4420" 00:17:40.348 }, 00:17:40.348 "peer_address": { 00:17:40.348 "trtype": "TCP", 00:17:40.348 "adrfam": "IPv4", 00:17:40.348 "traddr": "10.0.0.1", 00:17:40.348 "trsvcid": "55510" 00:17:40.348 }, 00:17:40.348 "auth": { 00:17:40.348 "state": "completed", 00:17:40.348 "digest": "sha384", 00:17:40.348 "dhgroup": "ffdhe8192" 00:17:40.348 } 00:17:40.348 } 00:17:40.348 ]' 00:17:40.348 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.348 05:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.348 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.607 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.607 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.607 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.607 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.607 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.607 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:40.607 05:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:41.176 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:41.176 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:41.176 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.176 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.176 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.176 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.176 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.176 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.433 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:41.433 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.433 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:41.434 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.434 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:41.434 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.434 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.434 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.434 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.434 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.434 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.434 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.434 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.000 00:17:42.000 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.000 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.000 05:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.260 { 00:17:42.260 "cntlid": 91, 00:17:42.260 "qid": 0, 00:17:42.260 "state": "enabled", 00:17:42.260 "thread": "nvmf_tgt_poll_group_000", 00:17:42.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:42.260 "listen_address": { 00:17:42.260 "trtype": "TCP", 00:17:42.260 "adrfam": "IPv4", 00:17:42.260 "traddr": "10.0.0.2", 00:17:42.260 "trsvcid": "4420" 00:17:42.260 }, 00:17:42.260 "peer_address": { 00:17:42.260 "trtype": "TCP", 00:17:42.260 "adrfam": "IPv4", 00:17:42.260 "traddr": "10.0.0.1", 00:17:42.260 "trsvcid": "55538" 00:17:42.260 }, 00:17:42.260 "auth": { 00:17:42.260 "state": "completed", 00:17:42.260 "digest": "sha384", 00:17:42.260 "dhgroup": "ffdhe8192" 00:17:42.260 } 00:17:42.260 } 00:17:42.260 ]' 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.260 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.520 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:42.520 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:43.088 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.088 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:43.088 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.088 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.088 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.088 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:43.088 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.088 05:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.348 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.916 00:17:43.916 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.916 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.916 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.916 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.916 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.916 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.916 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.916 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.916 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.916 { 00:17:43.916 "cntlid": 93, 00:17:43.916 "qid": 0, 00:17:43.916 "state": "enabled", 00:17:43.916 "thread": "nvmf_tgt_poll_group_000", 00:17:43.916 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:43.916 "listen_address": { 00:17:43.916 "trtype": "TCP", 00:17:43.916 "adrfam": "IPv4", 00:17:43.916 "traddr": "10.0.0.2", 00:17:43.916 "trsvcid": "4420" 00:17:43.916 }, 00:17:43.916 "peer_address": { 00:17:43.916 "trtype": "TCP", 00:17:43.916 "adrfam": "IPv4", 00:17:43.916 "traddr": "10.0.0.1", 00:17:43.916 "trsvcid": "55576" 00:17:43.916 }, 00:17:43.916 "auth": { 00:17:43.916 "state": "completed", 00:17:43.916 "digest": "sha384", 00:17:43.916 "dhgroup": "ffdhe8192" 00:17:43.916 } 00:17:43.916 } 00:17:43.916 ]' 00:17:43.916 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.175 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.175 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.175 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.175 05:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.175 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.175 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.175 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.434 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:44.434 05:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.002 05:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.570 00:17:45.570 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:45.570 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.570 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.830 { 00:17:45.830 "cntlid": 95, 00:17:45.830 "qid": 0, 00:17:45.830 "state": "enabled", 00:17:45.830 "thread": "nvmf_tgt_poll_group_000", 00:17:45.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:45.830 "listen_address": { 00:17:45.830 "trtype": "TCP", 00:17:45.830 "adrfam": "IPv4", 00:17:45.830 "traddr": "10.0.0.2", 00:17:45.830 "trsvcid": "4420" 00:17:45.830 }, 00:17:45.830 "peer_address": { 00:17:45.830 "trtype": "TCP", 00:17:45.830 "adrfam": "IPv4", 00:17:45.830 "traddr": "10.0.0.1", 00:17:45.830 "trsvcid": "55608" 00:17:45.830 }, 00:17:45.830 "auth": { 00:17:45.830 "state": "completed", 00:17:45.830 "digest": "sha384", 00:17:45.830 "dhgroup": "ffdhe8192" 00:17:45.830 } 00:17:45.830 } 00:17:45.830 ]' 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.830 05:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.830 05:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.089 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:46.089 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:46.658 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.658 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:46.658 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.658 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.658 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.658 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:46.658 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.658 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.658 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:46.658 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.918 05:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.177 00:17:47.177 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.177 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.177 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.479 05:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.479 { 00:17:47.479 "cntlid": 97, 00:17:47.479 "qid": 0, 00:17:47.479 "state": "enabled", 00:17:47.479 "thread": "nvmf_tgt_poll_group_000", 00:17:47.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:47.479 "listen_address": { 00:17:47.479 "trtype": "TCP", 00:17:47.479 "adrfam": "IPv4", 00:17:47.479 "traddr": "10.0.0.2", 00:17:47.479 "trsvcid": "4420" 00:17:47.479 }, 00:17:47.479 "peer_address": { 00:17:47.479 "trtype": "TCP", 00:17:47.479 "adrfam": "IPv4", 00:17:47.479 "traddr": "10.0.0.1", 00:17:47.479 "trsvcid": "55630" 00:17:47.479 }, 00:17:47.479 "auth": { 00:17:47.479 "state": "completed", 00:17:47.479 "digest": "sha512", 00:17:47.479 "dhgroup": "null" 00:17:47.479 } 00:17:47.479 } 00:17:47.479 ]' 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.479 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.777 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:47.777 05:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.406 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.407 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.407 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.665 00:17:48.665 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.665 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.665 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.925 { 00:17:48.925 "cntlid": 99, 
00:17:48.925 "qid": 0, 00:17:48.925 "state": "enabled", 00:17:48.925 "thread": "nvmf_tgt_poll_group_000", 00:17:48.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:48.925 "listen_address": { 00:17:48.925 "trtype": "TCP", 00:17:48.925 "adrfam": "IPv4", 00:17:48.925 "traddr": "10.0.0.2", 00:17:48.925 "trsvcid": "4420" 00:17:48.925 }, 00:17:48.925 "peer_address": { 00:17:48.925 "trtype": "TCP", 00:17:48.925 "adrfam": "IPv4", 00:17:48.925 "traddr": "10.0.0.1", 00:17:48.925 "trsvcid": "41538" 00:17:48.925 }, 00:17:48.925 "auth": { 00:17:48.925 "state": "completed", 00:17:48.925 "digest": "sha512", 00:17:48.925 "dhgroup": "null" 00:17:48.925 } 00:17:48.925 } 00:17:48.925 ]' 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.925 05:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.183 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret 
DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:49.183 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:49.751 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.751 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:49.751 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.751 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.751 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.751 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.751 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.751 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.010 05:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.268 00:17:50.268 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.268 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.268 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.527 { 00:17:50.527 "cntlid": 101, 00:17:50.527 "qid": 0, 00:17:50.527 "state": "enabled", 00:17:50.527 "thread": "nvmf_tgt_poll_group_000", 00:17:50.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:50.527 "listen_address": { 00:17:50.527 "trtype": "TCP", 00:17:50.527 "adrfam": "IPv4", 00:17:50.527 "traddr": "10.0.0.2", 00:17:50.527 "trsvcid": "4420" 00:17:50.527 }, 00:17:50.527 "peer_address": { 00:17:50.527 "trtype": "TCP", 00:17:50.527 "adrfam": "IPv4", 00:17:50.527 "traddr": "10.0.0.1", 00:17:50.527 "trsvcid": "41570" 00:17:50.527 }, 00:17:50.527 "auth": { 00:17:50.527 "state": "completed", 00:17:50.527 "digest": "sha512", 00:17:50.527 "dhgroup": "null" 00:17:50.527 } 00:17:50.527 } 
00:17:50.527 ]' 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.527 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.786 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:50.786 05:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:51.363 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.363 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.363 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:51.363 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.363 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.363 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.363 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.363 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.363 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.635 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.893 00:17:51.893 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.893 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.893 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.893 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.893 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:51.893 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.893 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.152 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.152 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.152 { 00:17:52.152 "cntlid": 103, 00:17:52.152 "qid": 0, 00:17:52.152 "state": "enabled", 00:17:52.152 "thread": "nvmf_tgt_poll_group_000", 00:17:52.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:52.152 "listen_address": { 00:17:52.152 "trtype": "TCP", 00:17:52.152 "adrfam": "IPv4", 00:17:52.152 "traddr": "10.0.0.2", 00:17:52.152 "trsvcid": "4420" 00:17:52.152 }, 00:17:52.152 "peer_address": { 00:17:52.152 "trtype": "TCP", 00:17:52.152 "adrfam": "IPv4", 00:17:52.152 "traddr": "10.0.0.1", 00:17:52.152 "trsvcid": "41600" 00:17:52.152 }, 00:17:52.152 "auth": { 00:17:52.152 "state": "completed", 00:17:52.152 "digest": "sha512", 00:17:52.152 "dhgroup": "null" 00:17:52.152 } 00:17:52.152 } 00:17:52.152 ]' 00:17:52.152 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.152 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.152 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.152 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:52.152 05:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.152 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.152 05:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.152 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.410 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:52.410 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:52.978 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.978 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:52.978 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.978 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.978 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.978 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.978 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.978 05:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:52.978 05:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.237 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.496 00:17:53.496 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.496 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.496 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.496 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.496 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.496 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.496 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.496 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.496 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.496 { 00:17:53.496 "cntlid": 105, 00:17:53.496 "qid": 0, 00:17:53.496 "state": "enabled", 00:17:53.496 "thread": "nvmf_tgt_poll_group_000", 00:17:53.496 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:53.496 "listen_address": { 00:17:53.496 "trtype": "TCP", 00:17:53.496 "adrfam": "IPv4", 00:17:53.496 "traddr": "10.0.0.2", 00:17:53.496 "trsvcid": "4420" 00:17:53.496 }, 00:17:53.496 "peer_address": { 00:17:53.496 "trtype": "TCP", 00:17:53.496 "adrfam": "IPv4", 00:17:53.496 "traddr": "10.0.0.1", 00:17:53.496 "trsvcid": "41622" 00:17:53.496 }, 00:17:53.496 "auth": { 00:17:53.496 "state": "completed", 00:17:53.496 "digest": "sha512", 00:17:53.496 "dhgroup": "ffdhe2048" 00:17:53.496 } 00:17:53.496 } 00:17:53.496 ]' 00:17:53.496 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.754 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.754 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.754 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.754 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.754 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.754 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.754 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.014 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret 
DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:54.014 05:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:17:54.583 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.583 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:54.583 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.583 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.583 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.583 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.583 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.583 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.842 05:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.842 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.843 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.843 00:17:55.102 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.102 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.102 05:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.102 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.102 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.102 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.102 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.102 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.102 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.102 { 00:17:55.102 "cntlid": 107, 00:17:55.102 "qid": 0, 00:17:55.102 "state": "enabled", 00:17:55.102 "thread": "nvmf_tgt_poll_group_000", 00:17:55.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:55.102 "listen_address": { 00:17:55.102 "trtype": "TCP", 00:17:55.102 "adrfam": "IPv4", 00:17:55.102 "traddr": "10.0.0.2", 00:17:55.102 "trsvcid": "4420" 00:17:55.102 }, 00:17:55.102 "peer_address": { 00:17:55.102 "trtype": "TCP", 00:17:55.102 "adrfam": "IPv4", 00:17:55.102 "traddr": "10.0.0.1", 00:17:55.102 "trsvcid": "41652" 00:17:55.102 }, 00:17:55.102 "auth": { 00:17:55.102 "state": 
"completed", 00:17:55.102 "digest": "sha512", 00:17:55.102 "dhgroup": "ffdhe2048" 00:17:55.102 } 00:17:55.102 } 00:17:55.102 ]' 00:17:55.102 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.102 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.102 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.361 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.361 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.361 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.361 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.361 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.621 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:55.621 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:17:56.189 05:39:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.189 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:56.189 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.189 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.189 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.189 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.189 05:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.189 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.448 00:17:56.448 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.448 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.448 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.707 
05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.707 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.707 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.707 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.707 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.707 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.707 { 00:17:56.707 "cntlid": 109, 00:17:56.707 "qid": 0, 00:17:56.707 "state": "enabled", 00:17:56.707 "thread": "nvmf_tgt_poll_group_000", 00:17:56.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:56.707 "listen_address": { 00:17:56.707 "trtype": "TCP", 00:17:56.707 "adrfam": "IPv4", 00:17:56.707 "traddr": "10.0.0.2", 00:17:56.707 "trsvcid": "4420" 00:17:56.707 }, 00:17:56.707 "peer_address": { 00:17:56.707 "trtype": "TCP", 00:17:56.707 "adrfam": "IPv4", 00:17:56.707 "traddr": "10.0.0.1", 00:17:56.707 "trsvcid": "41672" 00:17:56.707 }, 00:17:56.707 "auth": { 00:17:56.707 "state": "completed", 00:17:56.707 "digest": "sha512", 00:17:56.707 "dhgroup": "ffdhe2048" 00:17:56.707 } 00:17:56.707 } 00:17:56.707 ]' 00:17:56.708 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.708 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.708 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.966 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.966 05:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.966 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.967 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.967 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.967 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:56.967 05:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:17:57.535 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.535 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:57.535 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.535 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.535 
05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.535 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.535 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.535 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.794 05:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.794 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.053 00:17:58.053 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.053 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.053 05:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.312 { 00:17:58.312 "cntlid": 111, 
00:17:58.312 "qid": 0, 00:17:58.312 "state": "enabled", 00:17:58.312 "thread": "nvmf_tgt_poll_group_000", 00:17:58.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:58.312 "listen_address": { 00:17:58.312 "trtype": "TCP", 00:17:58.312 "adrfam": "IPv4", 00:17:58.312 "traddr": "10.0.0.2", 00:17:58.312 "trsvcid": "4420" 00:17:58.312 }, 00:17:58.312 "peer_address": { 00:17:58.312 "trtype": "TCP", 00:17:58.312 "adrfam": "IPv4", 00:17:58.312 "traddr": "10.0.0.1", 00:17:58.312 "trsvcid": "46686" 00:17:58.312 }, 00:17:58.312 "auth": { 00:17:58.312 "state": "completed", 00:17:58.312 "digest": "sha512", 00:17:58.312 "dhgroup": "ffdhe2048" 00:17:58.312 } 00:17:58.312 } 00:17:58.312 ]' 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.312 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.571 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:58.571 05:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:17:59.139 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.139 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:59.139 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.139 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.139 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.139 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.139 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.139 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.139 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.398 05:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:59.398 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.398 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.398 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:59.398 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:59.399 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.399 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.399 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.399 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.399 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.399 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.399 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.399 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.657 00:17:59.657 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.657 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.657 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.916 { 00:17:59.916 "cntlid": 113, 00:17:59.916 "qid": 0, 00:17:59.916 "state": "enabled", 00:17:59.916 "thread": "nvmf_tgt_poll_group_000", 00:17:59.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:59.916 "listen_address": { 00:17:59.916 "trtype": "TCP", 00:17:59.916 "adrfam": "IPv4", 00:17:59.916 "traddr": "10.0.0.2", 00:17:59.916 "trsvcid": "4420" 00:17:59.916 }, 00:17:59.916 "peer_address": { 00:17:59.916 "trtype": "TCP", 00:17:59.916 "adrfam": "IPv4", 00:17:59.916 "traddr": "10.0.0.1", 00:17:59.916 "trsvcid": "46714" 00:17:59.916 }, 00:17:59.916 "auth": { 00:17:59.916 "state": 
"completed", 00:17:59.916 "digest": "sha512", 00:17:59.916 "dhgroup": "ffdhe3072" 00:17:59.916 } 00:17:59.916 } 00:17:59.916 ]' 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.916 05:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.176 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:18:00.176 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret 
DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:18:00.743 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.743 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.743 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.743 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.743 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.743 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.743 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.743 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.001 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:01.001 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.001 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.002 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:01.002 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:01.002 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.002 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.002 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.002 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.002 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.002 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.002 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.002 05:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.260 00:18:01.260 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.260 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.260 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.260 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.260 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.260 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.260 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.517 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.517 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.517 { 00:18:01.517 "cntlid": 115, 00:18:01.517 "qid": 0, 00:18:01.517 "state": "enabled", 00:18:01.517 "thread": "nvmf_tgt_poll_group_000", 00:18:01.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:01.517 "listen_address": { 00:18:01.517 "trtype": "TCP", 00:18:01.517 "adrfam": "IPv4", 00:18:01.517 "traddr": "10.0.0.2", 00:18:01.517 "trsvcid": "4420" 00:18:01.517 }, 00:18:01.517 "peer_address": { 00:18:01.517 "trtype": "TCP", 00:18:01.517 "adrfam": "IPv4", 00:18:01.517 "traddr": "10.0.0.1", 00:18:01.517 "trsvcid": "46728" 00:18:01.517 }, 00:18:01.517 "auth": { 00:18:01.517 "state": "completed", 00:18:01.517 "digest": "sha512", 00:18:01.517 "dhgroup": "ffdhe3072" 00:18:01.517 } 00:18:01.517 } 00:18:01.517 ]' 00:18:01.517 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.517 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.517 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.517 05:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.517 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.517 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.517 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.517 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.775 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:18:01.775 05:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:18:02.343 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.343 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.343 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:02.343 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.343 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.343 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.343 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.343 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.602 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.861 00:18:02.861 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.861 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.861 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.861 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.861 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.861 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.861 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.861 05:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.861 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.861 { 00:18:02.861 "cntlid": 117, 00:18:02.861 "qid": 0, 00:18:02.861 "state": "enabled", 00:18:02.861 "thread": "nvmf_tgt_poll_group_000", 00:18:02.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:02.861 "listen_address": { 00:18:02.861 "trtype": "TCP", 00:18:02.861 "adrfam": "IPv4", 00:18:02.861 "traddr": "10.0.0.2", 00:18:02.861 "trsvcid": "4420" 00:18:02.861 }, 00:18:02.861 "peer_address": { 00:18:02.861 "trtype": "TCP", 00:18:02.861 "adrfam": "IPv4", 00:18:02.861 "traddr": "10.0.0.1", 00:18:02.861 "trsvcid": "46756" 00:18:02.861 }, 00:18:02.861 "auth": { 00:18:02.861 "state": "completed", 00:18:02.861 "digest": "sha512", 00:18:02.861 "dhgroup": "ffdhe3072" 00:18:02.861 } 00:18:02.861 } 00:18:02.861 ]' 00:18:02.861 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.119 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.119 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.119 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.119 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.119 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.119 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.119 05:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.378 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:18:03.378 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:18:03.946 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.946 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.946 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.946 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.947 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.947 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.947 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:03.947 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.206 05:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.464 00:18:04.464 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.464 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.464 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.464 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.464 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.464 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.464 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.464 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.464 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.464 { 00:18:04.464 "cntlid": 119, 00:18:04.464 "qid": 0, 00:18:04.464 "state": "enabled", 00:18:04.464 "thread": "nvmf_tgt_poll_group_000", 00:18:04.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:04.464 "listen_address": { 00:18:04.464 "trtype": "TCP", 00:18:04.464 "adrfam": "IPv4", 00:18:04.464 "traddr": "10.0.0.2", 00:18:04.464 "trsvcid": "4420" 00:18:04.464 }, 00:18:04.464 "peer_address": { 00:18:04.464 "trtype": "TCP", 00:18:04.464 "adrfam": "IPv4", 00:18:04.464 "traddr": "10.0.0.1", 
00:18:04.464 "trsvcid": "46780" 00:18:04.464 }, 00:18:04.464 "auth": { 00:18:04.464 "state": "completed", 00:18:04.464 "digest": "sha512", 00:18:04.464 "dhgroup": "ffdhe3072" 00:18:04.464 } 00:18:04.464 } 00:18:04.464 ]' 00:18:04.464 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.722 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.722 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.722 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.722 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.723 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.723 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.723 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.981 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:04.981 05:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.551 05:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.551 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.119 00:18:06.119 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.119 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.119 05:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.119 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.119 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.120 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.120 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.120 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.120 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.120 { 00:18:06.120 "cntlid": 121, 00:18:06.120 "qid": 0, 00:18:06.120 "state": "enabled", 00:18:06.120 "thread": "nvmf_tgt_poll_group_000", 00:18:06.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:06.120 "listen_address": { 00:18:06.120 "trtype": "TCP", 00:18:06.120 "adrfam": "IPv4", 00:18:06.120 "traddr": "10.0.0.2", 00:18:06.120 "trsvcid": "4420" 00:18:06.120 }, 00:18:06.120 "peer_address": { 00:18:06.120 "trtype": "TCP", 00:18:06.120 "adrfam": "IPv4", 00:18:06.120 "traddr": "10.0.0.1", 00:18:06.120 "trsvcid": "46808" 00:18:06.120 }, 00:18:06.120 "auth": { 00:18:06.120 "state": "completed", 00:18:06.120 "digest": "sha512", 00:18:06.120 "dhgroup": "ffdhe4096" 00:18:06.120 } 00:18:06.120 } 00:18:06.120 ]' 00:18:06.120 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.120 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.120 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.379 05:39:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.379 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.379 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.379 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.379 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.379 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:18:06.379 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:18:06.948 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.948 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:06.948 05:39:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.948 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.207 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.207 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.207 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.207 05:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.207 05:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.207 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.467 00:18:07.467 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.467 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.467 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.726 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.726 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.726 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.726 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:07.726 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.726 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.726 { 00:18:07.726 "cntlid": 123, 00:18:07.726 "qid": 0, 00:18:07.726 "state": "enabled", 00:18:07.726 "thread": "nvmf_tgt_poll_group_000", 00:18:07.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:07.726 "listen_address": { 00:18:07.726 "trtype": "TCP", 00:18:07.726 "adrfam": "IPv4", 00:18:07.726 "traddr": "10.0.0.2", 00:18:07.726 "trsvcid": "4420" 00:18:07.726 }, 00:18:07.726 "peer_address": { 00:18:07.726 "trtype": "TCP", 00:18:07.726 "adrfam": "IPv4", 00:18:07.726 "traddr": "10.0.0.1", 00:18:07.726 "trsvcid": "46846" 00:18:07.726 }, 00:18:07.726 "auth": { 00:18:07.726 "state": "completed", 00:18:07.726 "digest": "sha512", 00:18:07.726 "dhgroup": "ffdhe4096" 00:18:07.726 } 00:18:07.726 } 00:18:07.726 ]' 00:18:07.726 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.726 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.726 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.985 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.985 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.985 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.985 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.985 05:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.244 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:18:08.244 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.813 05:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.813 05:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.072 00:18:09.072 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.072 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.072 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.331 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.331 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.331 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.331 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.331 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.331 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.331 { 00:18:09.331 "cntlid": 125, 00:18:09.331 "qid": 0, 00:18:09.331 "state": "enabled", 00:18:09.331 "thread": "nvmf_tgt_poll_group_000", 00:18:09.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:09.331 "listen_address": { 00:18:09.331 "trtype": "TCP", 00:18:09.331 "adrfam": "IPv4", 00:18:09.331 "traddr": "10.0.0.2", 00:18:09.331 
"trsvcid": "4420" 00:18:09.331 }, 00:18:09.331 "peer_address": { 00:18:09.331 "trtype": "TCP", 00:18:09.331 "adrfam": "IPv4", 00:18:09.331 "traddr": "10.0.0.1", 00:18:09.331 "trsvcid": "55186" 00:18:09.331 }, 00:18:09.331 "auth": { 00:18:09.331 "state": "completed", 00:18:09.331 "digest": "sha512", 00:18:09.331 "dhgroup": "ffdhe4096" 00:18:09.331 } 00:18:09.331 } 00:18:09.331 ]' 00:18:09.331 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.331 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.331 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.591 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.591 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.591 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.591 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.591 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.591 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:18:09.591 05:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:18:10.159 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.159 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:10.159 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.159 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.418 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.677 00:18:10.677 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.677 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.677 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.937 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.937 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.937 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.937 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.937 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.937 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.937 { 00:18:10.937 "cntlid": 127, 00:18:10.937 "qid": 0, 00:18:10.937 "state": "enabled", 00:18:10.937 "thread": "nvmf_tgt_poll_group_000", 00:18:10.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:10.937 "listen_address": { 00:18:10.937 "trtype": "TCP", 00:18:10.937 "adrfam": "IPv4", 00:18:10.937 "traddr": "10.0.0.2", 00:18:10.937 "trsvcid": "4420" 00:18:10.937 }, 00:18:10.937 "peer_address": { 00:18:10.937 "trtype": "TCP", 00:18:10.937 "adrfam": "IPv4", 00:18:10.937 "traddr": "10.0.0.1", 00:18:10.937 "trsvcid": "55220" 00:18:10.937 }, 00:18:10.937 "auth": { 00:18:10.937 "state": "completed", 00:18:10.937 "digest": "sha512", 00:18:10.937 "dhgroup": "ffdhe4096" 00:18:10.937 } 00:18:10.937 } 00:18:10.937 ]' 00:18:10.937 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.937 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.937 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.937 05:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.937 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.196 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.196 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.196 05:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.196 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:11.196 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:11.763 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.763 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.763 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.763 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:11.763 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.763 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.763 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.763 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.763 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.022 05:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.590 00:18:12.590 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.590 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.590 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.590 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.591 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.591 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.591 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.591 05:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.591 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.591 { 00:18:12.591 "cntlid": 129, 00:18:12.591 "qid": 0, 00:18:12.591 "state": "enabled", 00:18:12.591 "thread": "nvmf_tgt_poll_group_000", 00:18:12.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:12.591 "listen_address": { 00:18:12.591 "trtype": "TCP", 00:18:12.591 "adrfam": "IPv4", 00:18:12.591 "traddr": "10.0.0.2", 00:18:12.591 "trsvcid": "4420" 00:18:12.591 }, 00:18:12.591 "peer_address": { 00:18:12.591 "trtype": "TCP", 00:18:12.591 "adrfam": "IPv4", 00:18:12.591 "traddr": "10.0.0.1", 00:18:12.591 "trsvcid": "55252" 00:18:12.591 }, 00:18:12.591 "auth": { 00:18:12.591 "state": "completed", 00:18:12.591 "digest": "sha512", 00:18:12.591 "dhgroup": "ffdhe6144" 00:18:12.591 } 00:18:12.591 } 00:18:12.591 ]' 00:18:12.591 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.591 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.850 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.850 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.850 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.850 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.850 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.850 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.109 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:18:13.109 05:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:18:13.678 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:13.679 05:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.679 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.247 00:18:14.247 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.247 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.247 05:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.247 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.247 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.247 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.247 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.247 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.247 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.247 { 00:18:14.247 "cntlid": 131, 00:18:14.247 "qid": 0, 00:18:14.247 "state": "enabled", 00:18:14.247 "thread": "nvmf_tgt_poll_group_000", 00:18:14.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:14.247 "listen_address": { 00:18:14.247 "trtype": "TCP", 00:18:14.247 "adrfam": "IPv4", 00:18:14.247 "traddr": "10.0.0.2", 00:18:14.247 
"trsvcid": "4420" 00:18:14.247 }, 00:18:14.247 "peer_address": { 00:18:14.247 "trtype": "TCP", 00:18:14.247 "adrfam": "IPv4", 00:18:14.247 "traddr": "10.0.0.1", 00:18:14.247 "trsvcid": "55262" 00:18:14.247 }, 00:18:14.247 "auth": { 00:18:14.247 "state": "completed", 00:18:14.247 "digest": "sha512", 00:18:14.247 "dhgroup": "ffdhe6144" 00:18:14.247 } 00:18:14.247 } 00:18:14.247 ]' 00:18:14.247 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.247 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.247 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.506 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.506 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.506 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.506 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.506 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.764 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:18:14.765 05:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.332 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.899 00:18:15.899 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.899 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:15.899 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.899 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.899 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.899 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.899 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.899 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.899 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.899 { 00:18:15.899 "cntlid": 133, 00:18:15.899 "qid": 0, 00:18:15.899 "state": "enabled", 00:18:15.899 "thread": "nvmf_tgt_poll_group_000", 00:18:15.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:15.899 "listen_address": { 00:18:15.899 "trtype": "TCP", 00:18:15.899 "adrfam": "IPv4", 00:18:15.899 "traddr": "10.0.0.2", 00:18:15.899 "trsvcid": "4420" 00:18:15.899 }, 00:18:15.899 "peer_address": { 00:18:15.899 "trtype": "TCP", 00:18:15.899 "adrfam": "IPv4", 00:18:15.899 "traddr": "10.0.0.1", 00:18:15.899 "trsvcid": "55282" 00:18:15.899 }, 00:18:15.899 "auth": { 00:18:15.899 "state": "completed", 00:18:15.899 "digest": "sha512", 00:18:15.899 "dhgroup": "ffdhe6144" 00:18:15.899 } 00:18:15.899 } 00:18:15.899 ]' 00:18:15.899 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.158 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.158 05:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.158 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.158 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.158 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.158 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.158 05:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.416 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:18:16.416 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.983 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.241 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.241 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.241 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:17.241 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.241 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.241 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.241 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.241 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.241 05:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.500 00:18:17.500 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.500 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.500 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.759 { 00:18:17.759 "cntlid": 135, 00:18:17.759 "qid": 0, 00:18:17.759 "state": "enabled", 00:18:17.759 "thread": "nvmf_tgt_poll_group_000", 00:18:17.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:17.759 "listen_address": { 00:18:17.759 "trtype": "TCP", 00:18:17.759 "adrfam": "IPv4", 00:18:17.759 "traddr": "10.0.0.2", 00:18:17.759 "trsvcid": "4420" 00:18:17.759 }, 00:18:17.759 "peer_address": { 00:18:17.759 "trtype": "TCP", 00:18:17.759 "adrfam": "IPv4", 00:18:17.759 "traddr": "10.0.0.1", 00:18:17.759 "trsvcid": "55318" 00:18:17.759 }, 00:18:17.759 "auth": { 00:18:17.759 "state": "completed", 00:18:17.759 "digest": "sha512", 00:18:17.759 "dhgroup": "ffdhe6144" 00:18:17.759 } 00:18:17.759 } 00:18:17.759 ]' 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.759 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.018 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:18.018 05:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:18.587 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.587 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:18.587 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.587 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.587 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.587 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.587 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.587 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.587 05:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.846 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:18.846 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.846 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.846 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.846 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.846 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.846 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.846 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.846 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.846 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.847 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.847 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.847 05:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.414 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.414 { 00:18:19.414 "cntlid": 137, 00:18:19.414 "qid": 0, 00:18:19.414 "state": "enabled", 00:18:19.414 "thread": "nvmf_tgt_poll_group_000", 00:18:19.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:19.414 "listen_address": { 00:18:19.414 "trtype": "TCP", 00:18:19.414 "adrfam": "IPv4", 00:18:19.414 "traddr": "10.0.0.2", 00:18:19.414 
"trsvcid": "4420" 00:18:19.414 }, 00:18:19.414 "peer_address": { 00:18:19.414 "trtype": "TCP", 00:18:19.414 "adrfam": "IPv4", 00:18:19.414 "traddr": "10.0.0.1", 00:18:19.414 "trsvcid": "59606" 00:18:19.414 }, 00:18:19.414 "auth": { 00:18:19.414 "state": "completed", 00:18:19.414 "digest": "sha512", 00:18:19.414 "dhgroup": "ffdhe8192" 00:18:19.414 } 00:18:19.414 } 00:18:19.414 ]' 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.414 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.673 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.673 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.673 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.673 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:18:19.673 05:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:18:20.241 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.241 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:20.241 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.241 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.241 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.241 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.241 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.241 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.500 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:20.500 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.500 05:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.500 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.500 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:20.500 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.500 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.501 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.501 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.501 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.501 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.501 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.501 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.069 00:18:21.069 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.069 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.069 05:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.328 { 00:18:21.328 "cntlid": 139, 00:18:21.328 "qid": 0, 00:18:21.328 "state": "enabled", 00:18:21.328 "thread": "nvmf_tgt_poll_group_000", 00:18:21.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:21.328 "listen_address": { 00:18:21.328 "trtype": "TCP", 00:18:21.328 "adrfam": "IPv4", 00:18:21.328 "traddr": "10.0.0.2", 00:18:21.328 "trsvcid": "4420" 00:18:21.328 }, 00:18:21.328 "peer_address": { 00:18:21.328 "trtype": "TCP", 00:18:21.328 "adrfam": "IPv4", 00:18:21.328 "traddr": "10.0.0.1", 00:18:21.328 "trsvcid": "59642" 00:18:21.328 }, 00:18:21.328 "auth": { 00:18:21.328 "state": "completed", 00:18:21.328 "digest": "sha512", 00:18:21.328 "dhgroup": "ffdhe8192" 00:18:21.328 } 00:18:21.328 } 00:18:21.328 ]' 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.328 05:40:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.328 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.586 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:18:21.586 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: --dhchap-ctrl-secret DHHC-1:02:Nzc3ZTJiNjJkOGYzMWYwZjA1ZWE3NTI3NTQ4Mjk1Y2JiYmFlMWFjOWM4MzVlODhmPX7oEA==: 00:18:22.171 05:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.171 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:22.171 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.171 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.171 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.171 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.171 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.171 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.431 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.999 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.999 05:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.999 { 00:18:22.999 "cntlid": 141, 00:18:22.999 "qid": 0, 00:18:22.999 "state": "enabled", 00:18:22.999 "thread": "nvmf_tgt_poll_group_000", 00:18:22.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:22.999 "listen_address": { 00:18:22.999 "trtype": "TCP", 00:18:22.999 "adrfam": "IPv4", 00:18:22.999 "traddr": "10.0.0.2", 00:18:22.999 "trsvcid": "4420" 00:18:22.999 }, 00:18:22.999 "peer_address": { 00:18:22.999 "trtype": "TCP", 00:18:22.999 "adrfam": "IPv4", 00:18:22.999 "traddr": "10.0.0.1", 00:18:22.999 "trsvcid": "59674" 00:18:22.999 }, 00:18:22.999 "auth": { 00:18:22.999 "state": "completed", 00:18:22.999 "digest": "sha512", 00:18:22.999 "dhgroup": "ffdhe8192" 00:18:22.999 } 00:18:22.999 } 00:18:22.999 ]' 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.999 05:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.258 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.258 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.258 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.258 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.258 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.516 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:18:23.516 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:01:MDhhYmU2YjBkMGQ0NTc4NjVlYTk2ZDkyMTVhZmE2ZTTq9k+Z: 00:18:24.084 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.084 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:24.084 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.084 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.084 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.084 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.084 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.084 05:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.084 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:24.084 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.084 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.084 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.084 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:24.084 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.084 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:24.085 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.085 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.085 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.085 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.085 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.085 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.652 00:18:24.652 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.652 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.652 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.910 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.911 { 00:18:24.911 "cntlid": 143, 00:18:24.911 "qid": 0, 00:18:24.911 "state": "enabled", 00:18:24.911 "thread": "nvmf_tgt_poll_group_000", 00:18:24.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:24.911 "listen_address": { 00:18:24.911 "trtype": "TCP", 00:18:24.911 "adrfam": 
"IPv4", 00:18:24.911 "traddr": "10.0.0.2", 00:18:24.911 "trsvcid": "4420" 00:18:24.911 }, 00:18:24.911 "peer_address": { 00:18:24.911 "trtype": "TCP", 00:18:24.911 "adrfam": "IPv4", 00:18:24.911 "traddr": "10.0.0.1", 00:18:24.911 "trsvcid": "59704" 00:18:24.911 }, 00:18:24.911 "auth": { 00:18:24.911 "state": "completed", 00:18:24.911 "digest": "sha512", 00:18:24.911 "dhgroup": "ffdhe8192" 00:18:24.911 } 00:18:24.911 } 00:18:24.911 ]' 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.911 05:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.169 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:25.169 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:25.735 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.735 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:25.735 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.735 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.735 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.735 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:25.735 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:25.735 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:25.735 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.735 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.736 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:25.994 05:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.994 05:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.559 00:18:26.559 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.559 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.559 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.817 { 00:18:26.817 "cntlid": 145, 00:18:26.817 "qid": 0, 00:18:26.817 "state": "enabled", 00:18:26.817 "thread": "nvmf_tgt_poll_group_000", 00:18:26.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:26.817 "listen_address": { 00:18:26.817 "trtype": "TCP", 00:18:26.817 "adrfam": "IPv4", 00:18:26.817 "traddr": "10.0.0.2", 00:18:26.817 "trsvcid": "4420" 00:18:26.817 }, 00:18:26.817 "peer_address": { 00:18:26.817 "trtype": "TCP", 00:18:26.817 "adrfam": "IPv4", 00:18:26.817 "traddr": "10.0.0.1", 00:18:26.817 "trsvcid": "59716" 00:18:26.817 }, 00:18:26.817 "auth": { 00:18:26.817 "state": 
"completed", 00:18:26.817 "digest": "sha512", 00:18:26.817 "dhgroup": "ffdhe8192" 00:18:26.817 } 00:18:26.817 } 00:18:26.817 ]' 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.817 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.076 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:18:27.076 05:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NGViYjQ0N2UwYmRmYWY3MzU0ODc5YzEyMTU5OWNlNWZjYzEzMjE1MTFkYzMxZmY3igHAfw==: --dhchap-ctrl-secret 
DHHC-1:03:OWZkMjYwMDBmNjdjODBiM2U2MzExNjc0NmVmODFmMjkyMzBiN2Y1MzdmMmIyMTliN2E5NTk1YzMxNWNhMTNhZJkBmhA=: 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:27.644 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:28.211 request: 00:18:28.211 { 00:18:28.211 "name": "nvme0", 00:18:28.211 "trtype": "tcp", 00:18:28.211 "traddr": "10.0.0.2", 00:18:28.211 "adrfam": "ipv4", 00:18:28.211 "trsvcid": "4420", 00:18:28.211 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:28.211 "prchk_reftag": false, 00:18:28.211 "prchk_guard": false, 00:18:28.211 "hdgst": false, 00:18:28.211 "ddgst": false, 00:18:28.211 "dhchap_key": "key2", 00:18:28.211 "allow_unrecognized_csi": false, 00:18:28.211 "method": "bdev_nvme_attach_controller", 00:18:28.211 "req_id": 1 00:18:28.211 } 00:18:28.211 Got JSON-RPC error response 00:18:28.211 response: 00:18:28.211 { 00:18:28.211 "code": -5, 00:18:28.211 "message": 
"Input/output error" 00:18:28.211 } 00:18:28.211 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.211 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.211 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.211 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.211 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:28.211 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.211 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.211 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.211 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.211 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.212 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.212 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.212 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.212 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:28.212 05:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.212 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:28.212 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.212 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:28.212 05:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.212 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.212 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.212 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:28.471 request: 00:18:28.471 { 00:18:28.471 "name": "nvme0", 00:18:28.471 "trtype": "tcp", 00:18:28.471 "traddr": "10.0.0.2", 00:18:28.471 "adrfam": "ipv4", 00:18:28.471 "trsvcid": "4420", 00:18:28.471 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:28.471 "prchk_reftag": false, 00:18:28.471 "prchk_guard": false, 00:18:28.471 "hdgst": 
false, 00:18:28.471 "ddgst": false, 00:18:28.471 "dhchap_key": "key1", 00:18:28.471 "dhchap_ctrlr_key": "ckey2", 00:18:28.471 "allow_unrecognized_csi": false, 00:18:28.471 "method": "bdev_nvme_attach_controller", 00:18:28.471 "req_id": 1 00:18:28.471 } 00:18:28.471 Got JSON-RPC error response 00:18:28.471 response: 00:18:28.471 { 00:18:28.471 "code": -5, 00:18:28.471 "message": "Input/output error" 00:18:28.471 } 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.471 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.730 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.988 request: 00:18:28.988 { 00:18:28.988 "name": "nvme0", 00:18:28.988 "trtype": 
"tcp", 00:18:28.988 "traddr": "10.0.0.2", 00:18:28.988 "adrfam": "ipv4", 00:18:28.988 "trsvcid": "4420", 00:18:28.988 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:28.988 "prchk_reftag": false, 00:18:28.988 "prchk_guard": false, 00:18:28.988 "hdgst": false, 00:18:28.988 "ddgst": false, 00:18:28.988 "dhchap_key": "key1", 00:18:28.988 "dhchap_ctrlr_key": "ckey1", 00:18:28.988 "allow_unrecognized_csi": false, 00:18:28.988 "method": "bdev_nvme_attach_controller", 00:18:28.988 "req_id": 1 00:18:28.988 } 00:18:28.988 Got JSON-RPC error response 00:18:28.988 response: 00:18:28.988 { 00:18:28.988 "code": -5, 00:18:28.988 "message": "Input/output error" 00:18:28.988 } 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1743315 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1743315 ']' 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1743315 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1743315 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1743315' 00:18:28.988 killing process with pid 1743315 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1743315 00:18:28.988 05:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1743315 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1765298 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1765298 00:18:29.248 05:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1765298 ']' 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.248 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1765298 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1765298 ']' 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.507 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.765 null0 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.OTa 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.H7L ]] 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.H7L 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.IRS 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.I6l ]] 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.I6l 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qs8 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ZkH ]] 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZkH 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.QUy 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.765 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.024 05:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.592 nvme0n1 00:18:30.592 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.592 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.592 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.850 { 00:18:30.850 "cntlid": 1, 00:18:30.850 "qid": 0, 00:18:30.850 "state": "enabled", 00:18:30.850 "thread": "nvmf_tgt_poll_group_000", 00:18:30.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:30.850 "listen_address": { 00:18:30.850 "trtype": "TCP", 00:18:30.850 "adrfam": "IPv4", 00:18:30.850 "traddr": "10.0.0.2", 00:18:30.850 "trsvcid": "4420" 00:18:30.850 }, 00:18:30.850 "peer_address": { 00:18:30.850 "trtype": "TCP", 00:18:30.850 "adrfam": "IPv4", 00:18:30.850 "traddr": 
"10.0.0.1", 00:18:30.850 "trsvcid": "47638" 00:18:30.850 }, 00:18:30.850 "auth": { 00:18:30.850 "state": "completed", 00:18:30.850 "digest": "sha512", 00:18:30.850 "dhgroup": "ffdhe8192" 00:18:30.850 } 00:18:30.850 } 00:18:30.850 ]' 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.850 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.109 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.109 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.109 05:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.109 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:31.109 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:31.676 05:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.676 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:31.676 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.676 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.676 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.676 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:31.676 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.676 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.676 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.676 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:31.676 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:31.934 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:31.934 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:31.934 05:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:31.934 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:31.934 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.934 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:31.934 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.934 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.934 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.934 05:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.192 request: 00:18:32.192 { 00:18:32.192 "name": "nvme0", 00:18:32.192 "trtype": "tcp", 00:18:32.192 "traddr": "10.0.0.2", 00:18:32.192 "adrfam": "ipv4", 00:18:32.192 "trsvcid": "4420", 00:18:32.192 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:32.192 "prchk_reftag": false, 00:18:32.192 "prchk_guard": false, 00:18:32.192 "hdgst": false, 00:18:32.192 "ddgst": false, 00:18:32.192 "dhchap_key": "key3", 00:18:32.192 
"allow_unrecognized_csi": false, 00:18:32.192 "method": "bdev_nvme_attach_controller", 00:18:32.192 "req_id": 1 00:18:32.192 } 00:18:32.192 Got JSON-RPC error response 00:18:32.192 response: 00:18:32.192 { 00:18:32.192 "code": -5, 00:18:32.192 "message": "Input/output error" 00:18:32.192 } 00:18:32.192 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:32.192 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:32.192 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:32.192 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:32.192 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:32.192 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:32.192 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:32.192 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:32.450 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:32.450 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:32.450 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:32.450 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:32.450 05:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.450 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:32.450 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.450 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:32.450 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.450 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.708 request: 00:18:32.708 { 00:18:32.708 "name": "nvme0", 00:18:32.708 "trtype": "tcp", 00:18:32.708 "traddr": "10.0.0.2", 00:18:32.708 "adrfam": "ipv4", 00:18:32.708 "trsvcid": "4420", 00:18:32.708 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:32.708 "prchk_reftag": false, 00:18:32.708 "prchk_guard": false, 00:18:32.708 "hdgst": false, 00:18:32.708 "ddgst": false, 00:18:32.708 "dhchap_key": "key3", 00:18:32.708 "allow_unrecognized_csi": false, 00:18:32.708 "method": "bdev_nvme_attach_controller", 00:18:32.708 "req_id": 1 00:18:32.708 } 00:18:32.708 Got JSON-RPC error response 00:18:32.708 response: 00:18:32.708 { 00:18:32.708 "code": -5, 00:18:32.708 "message": "Input/output error" 00:18:32.708 } 00:18:32.708 
05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.708 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.709 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:32.709 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:32.709 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:32.709 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:32.709 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.709 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:32.709 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:32.709 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:32.709 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:32.709 05:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:33.273 request: 00:18:33.273 { 00:18:33.273 "name": "nvme0", 00:18:33.273 "trtype": "tcp", 00:18:33.273 "traddr": "10.0.0.2", 00:18:33.273 "adrfam": "ipv4", 00:18:33.273 "trsvcid": "4420", 00:18:33.273 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:33.273 "prchk_reftag": false, 00:18:33.273 "prchk_guard": false, 00:18:33.273 "hdgst": false, 00:18:33.273 "ddgst": false, 00:18:33.273 "dhchap_key": "key0", 00:18:33.273 "dhchap_ctrlr_key": "key1", 00:18:33.273 "allow_unrecognized_csi": false, 00:18:33.273 "method": "bdev_nvme_attach_controller", 00:18:33.273 "req_id": 1 00:18:33.273 } 00:18:33.273 Got JSON-RPC error response 00:18:33.273 response: 00:18:33.273 { 00:18:33.273 "code": -5, 00:18:33.273 "message": "Input/output error" 00:18:33.273 } 00:18:33.273 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:33.273 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.273 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.273 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.273 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:33.273 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:33.273 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:33.273 nvme0n1 00:18:33.530 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:33.530 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:33.530 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.530 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.530 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.531 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.788 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:33.788 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.788 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:33.788 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.788 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:33.788 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:33.788 05:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:34.721 nvme0n1 00:18:34.721 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:34.721 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:34.721 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.721 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.721 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:34.721 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.721 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.721 
05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.721 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:34.721 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:34.721 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.978 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.978 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:34.978 05:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: --dhchap-ctrl-secret DHHC-1:03:OWM2ZjgwOTY1ZWRiYmZiZmYxZGIwYTQ0MWY5ZmM0ODEyMzNkY2E2MDY1Yzg0YjkzMDZmODMzNTIxNzViZDEwNj1wm3c=: 00:18:35.543 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:35.543 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:35.543 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:35.543 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:35.543 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:35.543 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:35.543 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:35.543 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.543 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.802 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:35.802 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:35.802 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:35.802 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:35.802 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.802 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:35.802 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.802 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:35.802 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:35.802 05:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:36.061 request: 00:18:36.061 { 00:18:36.061 "name": "nvme0", 00:18:36.061 "trtype": "tcp", 00:18:36.061 "traddr": "10.0.0.2", 00:18:36.061 "adrfam": "ipv4", 00:18:36.061 "trsvcid": "4420", 00:18:36.061 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:36.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:36.061 "prchk_reftag": false, 00:18:36.061 "prchk_guard": false, 00:18:36.061 "hdgst": false, 00:18:36.061 "ddgst": false, 00:18:36.061 "dhchap_key": "key1", 00:18:36.061 "allow_unrecognized_csi": false, 00:18:36.061 "method": "bdev_nvme_attach_controller", 00:18:36.061 "req_id": 1 00:18:36.061 } 00:18:36.061 Got JSON-RPC error response 00:18:36.061 response: 00:18:36.061 { 00:18:36.061 "code": -5, 00:18:36.061 "message": "Input/output error" 00:18:36.061 } 00:18:36.061 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:36.061 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.061 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.061 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.061 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.061 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.061 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:36.996 nvme0n1 00:18:36.996 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:36.996 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:36.996 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.996 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.996 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.996 05:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.255 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.255 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.255 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.255 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.255 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:37.255 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:37.255 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:37.513 nvme0n1 00:18:37.513 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:37.513 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:37.513 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.772 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.772 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.772 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: '' 2s 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: ]] 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:N2YzZjI2OTg4MmQyNGRmM2RkNDRlMjEzMmE2MDczYWHi/kXV: 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:38.031 05:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:39.936 
05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: 2s 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:39.936 05:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: ]] 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzYzYjA0MmFlNDM5YzQ4NWI5N2Y2YTQ0YjU0ZjRmMzUwMmZiMWQxNTI0MzJiYjQysapgSQ==: 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:39.936 05:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:42.477 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:42.478 05:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:42.736 nvme0n1 00:18:42.736 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.736 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.736 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.736 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.736 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:42.736 05:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:43.303 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:43.303 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:43.303 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.562 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.562 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:43.562 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.562 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.562 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.562 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:43.562 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:43.562 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:43.562 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:43.562 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.820 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.820 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:43.820 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.820 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.820 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.820 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:43.820 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:43.821 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:43.821 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:43.821 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.821 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:43.821 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:43.821 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:43.821 05:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:44.388 request: 00:18:44.388 { 00:18:44.388 "name": "nvme0", 00:18:44.388 "dhchap_key": "key1", 00:18:44.388 "dhchap_ctrlr_key": "key3", 00:18:44.388 "method": "bdev_nvme_set_keys", 00:18:44.388 "req_id": 1 00:18:44.388 } 00:18:44.388 Got JSON-RPC error response 00:18:44.388 response: 00:18:44.388 { 00:18:44.388 "code": -13, 00:18:44.388 "message": "Permission denied" 00:18:44.388 } 00:18:44.388 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:44.388 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.388 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.388 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.388 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:44.388 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:44.388 05:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.646 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:44.646 05:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:45.582 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:45.582 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:45.582 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.878 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:45.878 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:45.878 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.878 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.878 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.878 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:45.878 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:45.878 05:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:46.549 nvme0n1 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.549 05:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:46.549 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:47.118 request: 00:18:47.118 { 00:18:47.118 "name": "nvme0", 00:18:47.118 "dhchap_key": "key2", 00:18:47.118 "dhchap_ctrlr_key": "key0", 00:18:47.118 "method": "bdev_nvme_set_keys", 00:18:47.118 "req_id": 1 00:18:47.118 } 00:18:47.118 Got JSON-RPC error response 00:18:47.118 response: 00:18:47.118 { 00:18:47.118 "code": -13, 00:18:47.118 "message": "Permission denied" 00:18:47.118 } 00:18:47.118 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:47.118 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:47.118 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:47.118 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:47.118 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:47.118 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:47.118 05:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.118 05:40:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:47.118 05:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1743435 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1743435 ']' 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1743435 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1743435 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1743435' 00:18:48.497 killing process with pid 1743435 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1743435 00:18:48.497 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1743435 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:48.757 rmmod nvme_tcp 00:18:48.757 rmmod nvme_fabrics 00:18:48.757 rmmod nvme_keyring 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1765298 ']' 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1765298 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1765298 ']' 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1765298 
00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1765298 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1765298' 00:18:48.757 killing process with pid 1765298 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1765298 00:18:48.757 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1765298 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:49.017 05:40:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.017 05:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.565 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:51.565 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.OTa /tmp/spdk.key-sha256.IRS /tmp/spdk.key-sha384.qs8 /tmp/spdk.key-sha512.QUy /tmp/spdk.key-sha512.H7L /tmp/spdk.key-sha384.I6l /tmp/spdk.key-sha256.ZkH '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:51.565 00:18:51.565 real 2m31.922s 00:18:51.565 user 5m50.137s 00:18:51.565 sys 0m24.090s 00:18:51.565 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:51.565 05:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.565 ************************************ 00:18:51.565 END TEST nvmf_auth_target 00:18:51.565 ************************************ 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:51.565 ************************************ 00:18:51.565 START TEST nvmf_bdevio_no_huge 00:18:51.565 ************************************ 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:51.565 * Looking for test storage... 00:18:51.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.565 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:51.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.566 --rc genhtml_branch_coverage=1 00:18:51.566 --rc genhtml_function_coverage=1 00:18:51.566 --rc genhtml_legend=1 00:18:51.566 --rc geninfo_all_blocks=1 00:18:51.566 --rc geninfo_unexecuted_blocks=1 00:18:51.566 00:18:51.566 ' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:51.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.566 --rc genhtml_branch_coverage=1 00:18:51.566 --rc genhtml_function_coverage=1 00:18:51.566 --rc genhtml_legend=1 00:18:51.566 --rc geninfo_all_blocks=1 00:18:51.566 --rc geninfo_unexecuted_blocks=1 00:18:51.566 00:18:51.566 ' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:51.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.566 --rc genhtml_branch_coverage=1 00:18:51.566 --rc genhtml_function_coverage=1 00:18:51.566 --rc genhtml_legend=1 00:18:51.566 --rc geninfo_all_blocks=1 00:18:51.566 --rc geninfo_unexecuted_blocks=1 00:18:51.566 00:18:51.566 ' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:51.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.566 --rc genhtml_branch_coverage=1 
00:18:51.566 --rc genhtml_function_coverage=1 00:18:51.566 --rc genhtml_legend=1 00:18:51.566 --rc geninfo_all_blocks=1 00:18:51.566 --rc geninfo_unexecuted_blocks=1 00:18:51.566 00:18:51.566 ' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:51.566 05:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:51.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.566 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.567 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:51.567 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:51.567 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:51.567 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:51.567 05:40:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.140 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:18:58.141 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:58.141 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:58.141 Found net devices under 0000:86:00.0: cvl_0_0 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.141 
05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:58.141 Found net devices under 0000:86:00.1: cvl_0_1 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:58.141 05:40:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:58.141 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:58.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:58.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:18:58.142 00:18:58.142 --- 10.0.0.2 ping statistics --- 00:18:58.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.142 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:58.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:58.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:18:58.142 00:18:58.142 --- 10.0.0.1 ping statistics --- 00:18:58.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:58.142 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1772101 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1772101 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1772101 ']' 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.142 05:40:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.142 [2024-11-27 05:40:45.254564] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:18:58.142 [2024-11-27 05:40:45.254612] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:58.142 [2024-11-27 05:40:45.337612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.142 [2024-11-27 05:40:45.384674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.142 [2024-11-27 05:40:45.384709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.142 [2024-11-27 05:40:45.384717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.142 [2024-11-27 05:40:45.384723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.142 [2024-11-27 05:40:45.384730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.142 [2024-11-27 05:40:45.385803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:58.142 [2024-11-27 05:40:45.385924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:58.142 [2024-11-27 05:40:45.386031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.142 [2024-11-27 05:40:45.386032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.142 [2024-11-27 05:40:46.127853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:58.142 05:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.142 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.402 Malloc0 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:58.402 [2024-11-27 05:40:46.172140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.402 05:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:58.402 { 00:18:58.402 "params": { 00:18:58.402 "name": "Nvme$subsystem", 00:18:58.402 "trtype": "$TEST_TRANSPORT", 00:18:58.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.402 "adrfam": "ipv4", 00:18:58.402 "trsvcid": "$NVMF_PORT", 00:18:58.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.402 "hdgst": ${hdgst:-false}, 00:18:58.402 "ddgst": ${ddgst:-false} 00:18:58.402 }, 00:18:58.402 "method": "bdev_nvme_attach_controller" 00:18:58.402 } 00:18:58.402 EOF 00:18:58.402 )") 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:58.402 05:40:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:58.402 "params": { 00:18:58.402 "name": "Nvme1", 00:18:58.402 "trtype": "tcp", 00:18:58.402 "traddr": "10.0.0.2", 00:18:58.402 "adrfam": "ipv4", 00:18:58.402 "trsvcid": "4420", 00:18:58.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.402 "hdgst": false, 00:18:58.402 "ddgst": false 00:18:58.402 }, 00:18:58.402 "method": "bdev_nvme_attach_controller" 00:18:58.402 }' 00:18:58.402 [2024-11-27 05:40:46.224310] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:18:58.402 [2024-11-27 05:40:46.224357] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1772348 ] 00:18:58.402 [2024-11-27 05:40:46.304519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:58.402 [2024-11-27 05:40:46.352397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.402 [2024-11-27 05:40:46.352504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.402 [2024-11-27 05:40:46.352505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.971 I/O targets: 00:18:58.971 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:58.971 00:18:58.971 00:18:58.971 CUnit - A unit testing framework for C - Version 2.1-3 00:18:58.971 http://cunit.sourceforge.net/ 00:18:58.971 00:18:58.971 00:18:58.972 Suite: bdevio tests on: Nvme1n1 00:18:58.972 Test: blockdev write read block ...passed 00:18:58.972 Test: blockdev write zeroes read block ...passed 00:18:58.972 Test: blockdev write zeroes read no split ...passed 00:18:58.972 Test: blockdev write zeroes 
read split ...passed 00:18:58.972 Test: blockdev write zeroes read split partial ...passed 00:18:58.972 Test: blockdev reset ...[2024-11-27 05:40:46.804299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:58.972 [2024-11-27 05:40:46.804370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4e8e0 (9): Bad file descriptor 00:18:58.972 [2024-11-27 05:40:46.860503] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:58.972 passed 00:18:58.972 Test: blockdev write read 8 blocks ...passed 00:18:58.972 Test: blockdev write read size > 128k ...passed 00:18:58.972 Test: blockdev write read invalid size ...passed 00:18:58.972 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:58.972 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:58.972 Test: blockdev write read max offset ...passed 00:18:59.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:59.231 Test: blockdev writev readv 8 blocks ...passed 00:18:59.231 Test: blockdev writev readv 30 x 1block ...passed 00:18:59.231 Test: blockdev writev readv block ...passed 00:18:59.231 Test: blockdev writev readv size > 128k ...passed 00:18:59.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:59.231 Test: blockdev comparev and writev ...[2024-11-27 05:40:47.071515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.231 [2024-11-27 05:40:47.071548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.231 [2024-11-27 05:40:47.071562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.231 [2024-11-27 
05:40:47.071570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:59.231 [2024-11-27 05:40:47.071819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.231 [2024-11-27 05:40:47.071829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:59.231 [2024-11-27 05:40:47.071840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.231 [2024-11-27 05:40:47.071847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:59.231 [2024-11-27 05:40:47.072079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.231 [2024-11-27 05:40:47.072088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.231 [2024-11-27 05:40:47.072099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.231 [2024-11-27 05:40:47.072106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:59.231 [2024-11-27 05:40:47.072354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.231 [2024-11-27 05:40:47.072364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:59.231 [2024-11-27 05:40:47.072375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:59.231 [2024-11-27 05:40:47.072381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:59.231 passed 00:18:59.231 Test: blockdev nvme passthru rw ...passed 00:18:59.231 Test: blockdev nvme passthru vendor specific ...[2024-11-27 05:40:47.154163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.231 [2024-11-27 05:40:47.154180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:59.231 [2024-11-27 05:40:47.154286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.231 [2024-11-27 05:40:47.154296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:59.231 [2024-11-27 05:40:47.154400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.231 [2024-11-27 05:40:47.154409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:59.231 [2024-11-27 05:40:47.154505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:59.231 [2024-11-27 05:40:47.154514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:59.231 passed 00:18:59.231 Test: blockdev nvme admin passthru ...passed 00:18:59.231 Test: blockdev copy ...passed 00:18:59.231 00:18:59.231 Run Summary: Type Total Ran Passed Failed Inactive 00:18:59.231 suites 1 1 n/a 0 0 00:18:59.231 tests 23 23 23 0 0 00:18:59.231 asserts 152 152 152 0 n/a 00:18:59.231 00:18:59.231 Elapsed time = 1.056 seconds 
00:18:59.491 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.491 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.491 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:59.491 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.491 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:59.491 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:59.491 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:59.491 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:59.750 rmmod nvme_tcp 00:18:59.750 rmmod nvme_fabrics 00:18:59.750 rmmod nvme_keyring 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1772101 ']' 00:18:59.750 05:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1772101 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1772101 ']' 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1772101 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1772101 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1772101' 00:18:59.750 killing process with pid 1772101 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1772101 00:18:59.750 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1772101 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:00.009 05:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.009 05:40:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:02.551 00:19:02.551 real 0m10.950s 00:19:02.551 user 0m14.163s 00:19:02.551 sys 0m5.438s 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:02.551 ************************************ 00:19:02.551 END TEST nvmf_bdevio_no_huge 00:19:02.551 ************************************ 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:02.551 
************************************ 00:19:02.551 START TEST nvmf_tls 00:19:02.551 ************************************ 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:02.551 * Looking for test storage... 00:19:02.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:02.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.551 --rc genhtml_branch_coverage=1 00:19:02.551 --rc genhtml_function_coverage=1 00:19:02.551 --rc genhtml_legend=1 00:19:02.551 --rc geninfo_all_blocks=1 00:19:02.551 --rc geninfo_unexecuted_blocks=1 00:19:02.551 00:19:02.551 ' 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:02.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.551 --rc genhtml_branch_coverage=1 00:19:02.551 --rc genhtml_function_coverage=1 00:19:02.551 --rc genhtml_legend=1 00:19:02.551 --rc geninfo_all_blocks=1 00:19:02.551 --rc geninfo_unexecuted_blocks=1 00:19:02.551 00:19:02.551 ' 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:02.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.551 --rc genhtml_branch_coverage=1 00:19:02.551 --rc genhtml_function_coverage=1 00:19:02.551 --rc genhtml_legend=1 00:19:02.551 --rc geninfo_all_blocks=1 00:19:02.551 --rc geninfo_unexecuted_blocks=1 00:19:02.551 00:19:02.551 ' 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:02.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.551 --rc genhtml_branch_coverage=1 00:19:02.551 --rc genhtml_function_coverage=1 00:19:02.551 --rc genhtml_legend=1 00:19:02.551 --rc geninfo_all_blocks=1 00:19:02.551 --rc geninfo_unexecuted_blocks=1 00:19:02.551 00:19:02.551 ' 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.551 
05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.551 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:19:02.552 05:40:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:09.126 05:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:09.126 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:09.126 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:09.126 05:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:09.126 Found net devices under 0000:86:00.0: cvl_0_0 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.126 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:09.126 Found net devices under 0000:86:00.1: cvl_0_1 00:19:09.126 05:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:09.127 
05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:09.127 05:40:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:09.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:19:09.127 00:19:09.127 --- 10.0.0.2 ping statistics --- 00:19:09.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.127 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:09.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:09.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:19:09.127 00:19:09.127 --- 10.0.0.1 ping statistics --- 00:19:09.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.127 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1776111 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1776111 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1776111 ']' 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.127 [2024-11-27 05:40:56.277786] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:19:09.127 [2024-11-27 05:40:56.277838] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.127 [2024-11-27 05:40:56.356953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.127 [2024-11-27 05:40:56.397914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.127 [2024-11-27 05:40:56.397950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:09.127 [2024-11-27 05:40:56.397957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.127 [2024-11-27 05:40:56.397963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.127 [2024-11-27 05:40:56.397968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.127 [2024-11-27 05:40:56.398520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:09.127 true 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:09.127 05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:09.127 
05:40:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:09.127 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.127 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:09.387 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:09.387 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:09.387 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:09.645 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.645 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:09.645 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:09.645 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:09.645 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:09.645 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:09.904 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:09.904 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:09.904 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:19:10.164 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:10.164 05:40:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:10.423 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:10.423 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:10.423 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:10.423 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:10.423 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:10.683 05:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.STg93ETkDX 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.dBf86zKJWX 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:10.683 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.STg93ETkDX 00:19:10.684 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.dBf86zKJWX 00:19:10.684 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:10.943 05:40:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:11.202 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.STg93ETkDX 00:19:11.202 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.STg93ETkDX 00:19:11.202 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:11.460 [2024-11-27 05:40:59.285518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.460 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:11.718 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:11.718 [2024-11-27 05:40:59.666497] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.718 [2024-11-27 05:40:59.666740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.718 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:11.976 malloc0 00:19:11.976 05:40:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:12.235 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.STg93ETkDX 00:19:12.493 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.493 05:41:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.STg93ETkDX 00:19:24.706 Initializing NVMe Controllers 00:19:24.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:24.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:24.706 Initialization complete. Launching workers. 
00:19:24.706 ======================================================== 00:19:24.706 Latency(us) 00:19:24.706 Device Information : IOPS MiB/s Average min max 00:19:24.706 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16718.18 65.31 3828.28 809.65 5766.90 00:19:24.706 ======================================================== 00:19:24.706 Total : 16718.18 65.31 3828.28 809.65 5766.90 00:19:24.706 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.STg93ETkDX 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.STg93ETkDX 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1778459 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1778459 /var/tmp/bdevperf.sock 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1778459 ']' 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
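A quick sanity check on the spdk_nvme_perf summary above: the run uses 4 KiB (`-o 4096`) I/Os, so the MiB/s column is just IOPS × 4096 / 2^20, i.e. IOPS / 256. A minimal sketch using the figures from the table:

```python
# Sanity-check the throughput column of the perf summary above:
# with 4096-byte I/Os, MiB/s = IOPS * 4096 / 2**20 = IOPS / 256.

def mibps(iops: float, io_size: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size / (1 << 20)

# The table reports 16718.18 IOPS alongside 65.31 MiB/s.
print(round(mibps(16718.18), 2))  # 65.31
```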
00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.706 [2024-11-27 05:41:10.614798] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:19:24.706 [2024-11-27 05:41:10.614849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778459 ] 00:19:24.706 [2024-11-27 05:41:10.689185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.706 [2024-11-27 05:41:10.731092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:24.706 05:41:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.STg93ETkDX 00:19:24.706 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:24.706 [2024-11-27 05:41:11.200201] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.706 TLSTESTn1 00:19:24.706 05:41:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:24.706 Running I/O for 10 seconds... 00:19:25.642 5337.00 IOPS, 20.85 MiB/s [2024-11-27T04:41:14.583Z] 5511.00 IOPS, 21.53 MiB/s [2024-11-27T04:41:15.519Z] 5522.00 IOPS, 21.57 MiB/s [2024-11-27T04:41:16.455Z] 5516.00 IOPS, 21.55 MiB/s [2024-11-27T04:41:17.393Z] 5506.00 IOPS, 21.51 MiB/s [2024-11-27T04:41:18.772Z] 5494.33 IOPS, 21.46 MiB/s [2024-11-27T04:41:19.708Z] 5478.71 IOPS, 21.40 MiB/s [2024-11-27T04:41:20.644Z] 5462.62 IOPS, 21.34 MiB/s [2024-11-27T04:41:21.594Z] 5450.11 IOPS, 21.29 MiB/s [2024-11-27T04:41:21.594Z] 5463.90 IOPS, 21.34 MiB/s 00:19:33.591 Latency(us) 00:19:33.591 [2024-11-27T04:41:21.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.591 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:33.591 Verification LBA range: start 0x0 length 0x2000 00:19:33.591 TLSTESTn1 : 10.01 5469.72 21.37 0.00 0.00 23368.21 5180.46 24092.28 00:19:33.591 [2024-11-27T04:41:21.595Z] =================================================================================================================== 00:19:33.591 [2024-11-27T04:41:21.595Z] Total : 5469.72 21.37 0.00 0.00 23368.21 5180.46 24092.28 00:19:33.591 { 00:19:33.591 "results": [ 00:19:33.591 { 00:19:33.591 "job": "TLSTESTn1", 00:19:33.591 "core_mask": "0x4", 00:19:33.591 "workload": "verify", 00:19:33.591 "status": "finished", 00:19:33.591 "verify_range": { 00:19:33.591 "start": 0, 00:19:33.591 "length": 8192 00:19:33.591 }, 00:19:33.591 "queue_depth": 128, 00:19:33.591 "io_size": 4096, 00:19:33.591 "runtime": 10.01258, 00:19:33.591 "iops": 
5469.719093380528, 00:19:33.591 "mibps": 21.366090208517686, 00:19:33.591 "io_failed": 0, 00:19:33.592 "io_timeout": 0, 00:19:33.592 "avg_latency_us": 23368.208843095214, 00:19:33.592 "min_latency_us": 5180.464761904762, 00:19:33.592 "max_latency_us": 24092.281904761905 00:19:33.592 } 00:19:33.592 ], 00:19:33.592 "core_count": 1 00:19:33.592 } 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1778459 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1778459 ']' 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1778459 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1778459 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1778459' 00:19:33.592 killing process with pid 1778459 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1778459 00:19:33.592 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.592 00:19:33.592 Latency(us) 00:19:33.592 [2024-11-27T04:41:21.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.592 [2024-11-27T04:41:21.596Z] 
=================================================================================================================== 00:19:33.592 [2024-11-27T04:41:21.596Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:33.592 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1778459 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dBf86zKJWX 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dBf86zKJWX 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dBf86zKJWX 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dBf86zKJWX 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1780296 00:19:33.855 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.856 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.856 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1780296 /var/tmp/bdevperf.sock 00:19:33.856 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1780296 ']' 00:19:33.856 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.856 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.856 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.856 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.856 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.856 [2024-11-27 05:41:21.686573] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
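The `format_interchange_psk` calls near the top of this run turn a configured PSK string and digest id into the NVMe TLS PSK interchange form, `NVMeTLSkey-1:<hh>:<base64(key-bytes || CRC-32)>:`. A minimal Python sketch of that encoding — assuming, as the shell helper's embedded `python -` step appears to do, a zlib CRC-32 appended little-endian, with the key bytes being the literal ASCII string passed on the command line:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    """Encode a configured PSK in the NVMe TLS PSK interchange format:
    NVMeTLSkey-1:<hash-id>:<base64(key-bytes || CRC-32 of key-bytes)>:
    (CRC endianness is an assumption; see lead-in)."""
    data = key.encode()
    crc = struct.pack("<I", zlib.crc32(data))  # CRC-32, little-endian
    return "NVMeTLSkey-1:{:02x}:{}:".format(
        digest, base64.b64encode(data + crc).decode())

# Same inputs as the target/tls.sh@120 step above (digest 1 -> "01" field):
print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))
```

In the run above these inputs produced `key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:`, whose base64 payload decodes back to the ASCII key string plus four trailing CRC bytes.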
00:19:33.856 [2024-11-27 05:41:21.686618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780296 ] 00:19:33.856 [2024-11-27 05:41:21.754217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.856 [2024-11-27 05:41:21.790666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.114 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.114 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:34.114 05:41:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dBf86zKJWX 00:19:34.114 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.374 [2024-11-27 05:41:22.242676] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.374 [2024-11-27 05:41:22.254041] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:34.374 [2024-11-27 05:41:22.254154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11be1a0 (107): Transport endpoint is not connected 00:19:34.374 [2024-11-27 05:41:22.255137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11be1a0 (9): Bad file descriptor 00:19:34.374 
[2024-11-27 05:41:22.256139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:34.374 [2024-11-27 05:41:22.256153] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:34.374 [2024-11-27 05:41:22.256160] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:34.374 [2024-11-27 05:41:22.256168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:34.374 request: 00:19:34.374 { 00:19:34.374 "name": "TLSTEST", 00:19:34.374 "trtype": "tcp", 00:19:34.374 "traddr": "10.0.0.2", 00:19:34.374 "adrfam": "ipv4", 00:19:34.374 "trsvcid": "4420", 00:19:34.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.374 "prchk_reftag": false, 00:19:34.374 "prchk_guard": false, 00:19:34.374 "hdgst": false, 00:19:34.374 "ddgst": false, 00:19:34.374 "psk": "key0", 00:19:34.374 "allow_unrecognized_csi": false, 00:19:34.374 "method": "bdev_nvme_attach_controller", 00:19:34.374 "req_id": 1 00:19:34.374 } 00:19:34.374 Got JSON-RPC error response 00:19:34.374 response: 00:19:34.374 { 00:19:34.374 "code": -5, 00:19:34.374 "message": "Input/output error" 00:19:34.374 } 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1780296 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1780296 ']' 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1780296 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1780296 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1780296' 00:19:34.374 killing process with pid 1780296 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1780296 00:19:34.374 Received shutdown signal, test time was about 10.000000 seconds 00:19:34.374 00:19:34.374 Latency(us) 00:19:34.374 [2024-11-27T04:41:22.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.374 [2024-11-27T04:41:22.378Z] =================================================================================================================== 00:19:34.374 [2024-11-27T04:41:22.378Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:34.374 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1780296 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.STg93ETkDX 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.STg93ETkDX 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.STg93ETkDX 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.STg93ETkDX 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1780415 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1780415 
/var/tmp/bdevperf.sock 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1780415 ']' 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.634 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.634 [2024-11-27 05:41:22.537037] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:19:34.634 [2024-11-27 05:41:22.537086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780415 ] 00:19:34.634 [2024-11-27 05:41:22.613560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.893 [2024-11-27 05:41:22.660450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.893 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.893 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:34.893 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.STg93ETkDX 00:19:35.152 05:41:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:35.152 [2024-11-27 05:41:23.108269] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.152 [2024-11-27 05:41:23.116151] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:35.152 [2024-11-27 05:41:23.116171] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:35.152 [2024-11-27 05:41:23.116210] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:35.152 [2024-11-27 05:41:23.116724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6f1a0 (107): Transport endpoint is not connected 00:19:35.152 [2024-11-27 05:41:23.117717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6f1a0 (9): Bad file descriptor 00:19:35.152 [2024-11-27 05:41:23.118719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:35.152 [2024-11-27 05:41:23.118729] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:35.152 [2024-11-27 05:41:23.118737] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:35.152 [2024-11-27 05:41:23.118745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:35.152 request: 00:19:35.152 { 00:19:35.152 "name": "TLSTEST", 00:19:35.152 "trtype": "tcp", 00:19:35.152 "traddr": "10.0.0.2", 00:19:35.152 "adrfam": "ipv4", 00:19:35.152 "trsvcid": "4420", 00:19:35.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.152 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:35.152 "prchk_reftag": false, 00:19:35.152 "prchk_guard": false, 00:19:35.152 "hdgst": false, 00:19:35.152 "ddgst": false, 00:19:35.152 "psk": "key0", 00:19:35.152 "allow_unrecognized_csi": false, 00:19:35.152 "method": "bdev_nvme_attach_controller", 00:19:35.152 "req_id": 1 00:19:35.152 } 00:19:35.152 Got JSON-RPC error response 00:19:35.152 response: 00:19:35.152 { 00:19:35.152 "code": -5, 00:19:35.152 "message": "Input/output error" 00:19:35.152 } 00:19:35.152 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1780415 00:19:35.152 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1780415 ']' 00:19:35.152 05:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1780415 00:19:35.152 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:35.152 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.152 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1780415 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1780415' 00:19:35.412 killing process with pid 1780415 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1780415 00:19:35.412 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.412 00:19:35.412 Latency(us) 00:19:35.412 [2024-11-27T04:41:23.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.412 [2024-11-27T04:41:23.416Z] =================================================================================================================== 00:19:35.412 [2024-11-27T04:41:23.416Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1780415 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:35.412 05:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.STg93ETkDX 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.STg93ETkDX 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.STg93ETkDX 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.STg93ETkDX 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1780550 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1780550 /var/tmp/bdevperf.sock 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1780550 ']' 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.412 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.413 [2024-11-27 05:41:23.395595] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
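The `Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1` errors in the expected-failure attaches above show the TLS 1.3 PSK identity the target looks up: a `NVMe<retained>R<hh>` tag followed by the host NQN and subsystem NQN, space-separated, so a host or subsystem mismatch simply finds no registered key. A small sketch of that construction — the field layout is inferred from the log lines above, with `retained=0` and `hash=01` matching the `NVMe0R01` tag shown:

```python
def psk_identity(hostnqn: str, subnqn: str,
                 retained: int = 0, hash_id: int = 1) -> str:
    """Build the TLS PSK identity string the target uses for key lookup,
    as printed in the 'Could not find PSK for identity' errors above."""
    return "NVMe{}R{:02d} {} {}".format(retained, hash_id, hostnqn, subnqn)

# The mismatched-host case: host2 was never added to cnode1, so no PSK is
# registered under this identity and bdev_nvme_attach_controller fails.
print(psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
# NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
```

The same lookup failure appears for the mismatched-subsystem case (`host1`/`cnode2`) just below, which is why both negative tests end in the same `-5` JSON-RPC error.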
00:19:35.413 [2024-11-27 05:41:23.395641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780550 ]
00:19:35.671 [2024-11-27 05:41:23.468888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:35.671 [2024-11-27 05:41:23.505414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:35.671 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:35.671 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:19:35.671 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.STg93ETkDX
00:19:35.931 05:41:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0
00:19:36.191 [2024-11-27 05:41:23.976967] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:36.191 [2024-11-27 05:41:23.988390] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:19:36.191 [2024-11-27 05:41:23.988410] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:19:36.191 [2024-11-27 05:41:23.988433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:19:36.191 [2024-11-27 05:41:23.989286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e81a0 (107): Transport endpoint is not connected
00:19:36.191 [2024-11-27 05:41:23.990279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e81a0 (9): Bad file descriptor
00:19:36.191 [2024-11-27 05:41:23.991281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state
00:19:36.191 [2024-11-27 05:41:23.991294] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:19:36.191 [2024-11-27 05:41:23.991301] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted
00:19:36.191 [2024-11-27 05:41:23.991310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state.
00:19:36.191 request:
00:19:36.191 {
00:19:36.191 "name": "TLSTEST",
00:19:36.191 "trtype": "tcp",
00:19:36.191 "traddr": "10.0.0.2",
00:19:36.191 "adrfam": "ipv4",
00:19:36.191 "trsvcid": "4420",
00:19:36.191 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:19:36.191 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:36.191 "prchk_reftag": false,
00:19:36.191 "prchk_guard": false,
00:19:36.191 "hdgst": false,
00:19:36.191 "ddgst": false,
00:19:36.191 "psk": "key0",
00:19:36.191 "allow_unrecognized_csi": false,
00:19:36.191 "method": "bdev_nvme_attach_controller",
00:19:36.191 "req_id": 1
00:19:36.191 }
00:19:36.191 Got JSON-RPC error response
00:19:36.191 response:
00:19:36.191 {
00:19:36.191 "code": -5,
00:19:36.191 "message": "Input/output error"
00:19:36.191 }
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1780550
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1780550 ']'
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1780550
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1780550
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1780550'
killing process with pid 1780550
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1780550
Received shutdown signal, test time was about 10.000000 seconds
00:19:36.191
00:19:36.191 Latency(us)
00:19:36.191 [2024-11-27T04:41:24.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:36.191 [2024-11-27T04:41:24.195Z] ===================================================================================================================
00:19:36.191 [2024-11-27T04:41:24.195Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:36.191 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1780550
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1780778
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1780778 /var/tmp/bdevperf.sock
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1780778 ']'
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:36.450 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:36.450 [2024-11-27 05:41:24.275611] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:19:36.450 [2024-11-27 05:41:24.275659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780778 ]
00:19:36.450 [2024-11-27 05:41:24.343589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:36.450 [2024-11-27 05:41:24.381650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:36.710 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:36.710 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:19:36.710 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
00:19:36.710 [2024-11-27 05:41:24.648252] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed:
00:19:36.710 [2024-11-27 05:41:24.648286] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:19:36.710 request:
00:19:36.710 {
00:19:36.710 "name": "key0",
00:19:36.710 "path": "",
00:19:36.710 "method": "keyring_file_add_key",
00:19:36.710 "req_id": 1
00:19:36.710 }
00:19:36.710 Got JSON-RPC error response
00:19:36.710 response:
00:19:36.710 {
00:19:36.710 "code": -1,
00:19:36.710 "message": "Operation not permitted"
00:19:36.710 }
00:19:36.710 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:19:36.970 [2024-11-27 05:41:24.840847] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:36.970 [2024-11-27 05:41:24.840874] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:19:36.970 request:
00:19:36.970 {
00:19:36.970 "name": "TLSTEST",
00:19:36.970 "trtype": "tcp",
00:19:36.970 "traddr": "10.0.0.2",
00:19:36.970 "adrfam": "ipv4",
00:19:36.970 "trsvcid": "4420",
00:19:36.970 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:36.970 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:36.970 "prchk_reftag": false,
00:19:36.970 "prchk_guard": false,
00:19:36.970 "hdgst": false,
00:19:36.970 "ddgst": false,
00:19:36.970 "psk": "key0",
00:19:36.970 "allow_unrecognized_csi": false,
00:19:36.970 "method": "bdev_nvme_attach_controller",
00:19:36.970 "req_id": 1
00:19:36.970 }
00:19:36.970 Got JSON-RPC error response
00:19:36.970 response:
00:19:36.970 {
00:19:36.970 "code": -126,
00:19:36.970 "message": "Required key not available"
00:19:36.970 }
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1780778
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1780778 ']'
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1780778
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1780778
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1780778'
killing process with pid 1780778
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1780778
Received shutdown signal, test time was about 10.000000 seconds
00:19:36.970
00:19:36.970 Latency(us)
00:19:36.970 [2024-11-27T04:41:24.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:36.970 [2024-11-27T04:41:24.974Z] ===================================================================================================================
00:19:36.970 [2024-11-27T04:41:24.974Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:19:36.970 05:41:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1780778
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1776111
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1776111 ']'
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1776111
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1776111
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1776111'
killing process with pid 1776111
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1776111
00:19:37.229 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1776111
00:19:37.487 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.AfYjBaYj1B
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.AfYjBaYj1B
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1780929
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1780929
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1780929 ']'
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:37.488 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:37.488 [2024-11-27 05:41:25.393516] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:19:37.488 [2024-11-27 05:41:25.393561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:37.488 [2024-11-27 05:41:25.472817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:37.746 [2024-11-27 05:41:25.511251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:37.746 [2024-11-27 05:41:25.511287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:37.746 [2024-11-27 05:41:25.511293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:37.746 [2024-11-27 05:41:25.511300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:37.746 [2024-11-27 05:41:25.511306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:37.746 [2024-11-27 05:41:25.511879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:37.746 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:37.746 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:19:37.746 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:19:37.746 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:37.746 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:37.746 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:37.746 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.AfYjBaYj1B
00:19:37.746 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.AfYjBaYj1B
00:19:37.746 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:19:38.005 [2024-11-27 05:41:25.824497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:38.005 05:41:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:19:38.262 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:19:38.262 [2024-11-27 05:41:26.193446] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:19:38.262 [2024-11-27 05:41:26.193678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:38.262 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:19:38.519 malloc0
00:19:38.519 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:19:38.776 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AfYjBaYj1B
00:19:38.776 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AfYjBaYj1B
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AfYjBaYj1B
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1781282
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1781282 /var/tmp/bdevperf.sock
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1781282 ']'
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:39.035 05:41:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:39.035 [2024-11-27 05:41:26.976001] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:19:39.035 [2024-11-27 05:41:26.976049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781282 ]
00:19:39.293 [2024-11-27 05:41:27.050963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:39.293 [2024-11-27 05:41:27.093357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:39.293 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:39.293 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:19:39.293 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AfYjBaYj1B
00:19:39.553 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:19:39.553 [2024-11-27 05:41:27.554410] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:39.811 TLSTESTn1
00:19:39.811 05:41:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:19:39.811 Running I/O for 10 seconds...
00:19:42.123 5426.00 IOPS, 21.20 MiB/s [2024-11-27T04:41:31.064Z] 5527.00 IOPS, 21.59 MiB/s [2024-11-27T04:41:32.000Z] 5533.00 IOPS, 21.61 MiB/s [2024-11-27T04:41:32.936Z] 5590.75 IOPS, 21.84 MiB/s [2024-11-27T04:41:33.872Z] 5578.40 IOPS, 21.79 MiB/s [2024-11-27T04:41:34.810Z] 5591.67 IOPS, 21.84 MiB/s [2024-11-27T04:41:36.186Z] 5453.43 IOPS, 21.30 MiB/s [2024-11-27T04:41:37.123Z] 5437.62 IOPS, 21.24 MiB/s [2024-11-27T04:41:38.059Z] 5415.44 IOPS, 21.15 MiB/s [2024-11-27T04:41:38.059Z] 5398.70 IOPS, 21.09 MiB/s
00:19:50.055 Latency(us)
00:19:50.055 [2024-11-27T04:41:38.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:50.055 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:50.055 Verification LBA range: start 0x0 length 0x2000
00:19:50.055 TLSTESTn1 : 10.02 5402.42 21.10 0.00 0.00 23657.90 4712.35 30333.81
00:19:50.055 [2024-11-27T04:41:38.059Z] ===================================================================================================================
00:19:50.055 [2024-11-27T04:41:38.059Z] Total : 5402.42 21.10 0.00 0.00 23657.90 4712.35 30333.81
00:19:50.055 {
00:19:50.055 "results": [
00:19:50.055 {
00:19:50.055 "job": "TLSTESTn1",
00:19:50.055 "core_mask": "0x4",
00:19:50.055 "workload": "verify",
00:19:50.055 "status": "finished",
00:19:50.055 "verify_range": {
00:19:50.055 "start": 0,
00:19:50.055 "length": 8192
00:19:50.055 },
00:19:50.055 "queue_depth": 128,
00:19:50.055 "io_size": 4096,
00:19:50.055 "runtime": 10.016626,
00:19:50.055 "iops": 5402.417939933067,
00:19:50.055 "mibps": 21.103195077863543,
00:19:50.055 "io_failed": 0,
00:19:50.055 "io_timeout": 0,
00:19:50.055 "avg_latency_us": 23657.898306678846,
00:19:50.055 "min_latency_us": 4712.350476190476,
00:19:50.055 "max_latency_us": 30333.805714285714
00:19:50.055 }
00:19:50.055 ],
00:19:50.055 "core_count": 1
00:19:50.055 }
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1781282
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1781282 ']'
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1781282
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1781282
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1781282'
killing process with pid 1781282
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1781282
Received shutdown signal, test time was about 10.000000 seconds
00:19:50.055
00:19:50.055 Latency(us)
00:19:50.055 [2024-11-27T04:41:38.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:50.055 [2024-11-27T04:41:38.059Z] ===================================================================================================================
00:19:50.055 [2024-11-27T04:41:38.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:50.055 05:41:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1781282
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.AfYjBaYj1B
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AfYjBaYj1B
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AfYjBaYj1B
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AfYjBaYj1B
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.AfYjBaYj1B
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1782959
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1782959 /var/tmp/bdevperf.sock
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1782959 ']'
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:50.055 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:50.315 [2024-11-27 05:41:38.062253] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:19:50.315 [2024-11-27 05:41:38.062302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1782959 ]
00:19:50.315 [2024-11-27 05:41:38.135444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:50.315 [2024-11-27 05:41:38.174849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:50.315 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:50.315 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:19:50.315 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AfYjBaYj1B
00:19:50.574 [2024-11-27 05:41:38.446088] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AfYjBaYj1B': 0100666
00:19:50.574 [2024-11-27 05:41:38.446113] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:19:50.574 request:
00:19:50.574 {
00:19:50.574 "name": "key0",
00:19:50.574 "path": "/tmp/tmp.AfYjBaYj1B",
00:19:50.574 "method": "keyring_file_add_key",
00:19:50.574 "req_id": 1
00:19:50.574 }
00:19:50.574 Got JSON-RPC error response
00:19:50.574 response:
00:19:50.574 {
00:19:50.574 "code": -1,
00:19:50.574 "message": "Operation not permitted"
00:19:50.574 }
00:19:50.574 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:19:50.833 [2024-11-27 05:41:38.622627] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:50.833 [2024-11-27 05:41:38.622654] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:19:50.833 request:
00:19:50.833 {
00:19:50.833 "name": "TLSTEST",
00:19:50.833 "trtype": "tcp",
00:19:50.833 "traddr": "10.0.0.2",
00:19:50.833 "adrfam": "ipv4",
00:19:50.833 "trsvcid": "4420",
00:19:50.833 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:50.833 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:50.833 "prchk_reftag": false,
00:19:50.833 "prchk_guard": false,
00:19:50.833 "hdgst": false,
00:19:50.833 "ddgst": false,
00:19:50.833 "psk": "key0",
00:19:50.833 "allow_unrecognized_csi": false,
00:19:50.833 "method": "bdev_nvme_attach_controller",
00:19:50.833 "req_id": 1
00:19:50.833 }
00:19:50.833 Got JSON-RPC error response
00:19:50.833 response:
00:19:50.833 {
00:19:50.833 "code": -126,
00:19:50.833 "message": "Required key not available"
00:19:50.833 }
00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1782959
00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1782959 ']'
00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1782959
00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1782959
00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo
'killing process with pid 1782959' 00:19:50.833 killing process with pid 1782959 00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1782959 00:19:50.833 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.833 00:19:50.833 Latency(us) 00:19:50.833 [2024-11-27T04:41:38.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.833 [2024-11-27T04:41:38.837Z] =================================================================================================================== 00:19:50.833 [2024-11-27T04:41:38.837Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:50.833 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1782959 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1780929 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1780929 ']' 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1780929 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1780929 00:19:51.093 
05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1780929' 00:19:51.093 killing process with pid 1780929 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1780929 00:19:51.093 05:41:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1780929 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1783146 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1783146 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1783146 ']' 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:51.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.093 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.364 [2024-11-27 05:41:39.133272] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:19:51.364 [2024-11-27 05:41:39.133323] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.364 [2024-11-27 05:41:39.206861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.364 [2024-11-27 05:41:39.245265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:51.364 [2024-11-27 05:41:39.245300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.364 [2024-11-27 05:41:39.245307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.364 [2024-11-27 05:41:39.245313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.364 [2024-11-27 05:41:39.245318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:51.364 [2024-11-27 05:41:39.245910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.364 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.365 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:51.365 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:51.365 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.365 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.AfYjBaYj1B 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.AfYjBaYj1B 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.AfYjBaYj1B 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.AfYjBaYj1B 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:51.635 [2024-11-27 05:41:39.560741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.635 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:51.894 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:52.154 [2024-11-27 05:41:39.929702] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.154 [2024-11-27 05:41:39.929955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.154 05:41:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:52.154 malloc0 00:19:52.154 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:52.414 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AfYjBaYj1B 00:19:52.673 [2024-11-27 05:41:40.503310] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AfYjBaYj1B': 0100666 00:19:52.673 [2024-11-27 05:41:40.503338] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:52.673 request: 00:19:52.673 { 00:19:52.673 "name": "key0", 00:19:52.673 "path": "/tmp/tmp.AfYjBaYj1B", 00:19:52.673 "method": "keyring_file_add_key", 00:19:52.673 "req_id": 1 
00:19:52.673 } 00:19:52.673 Got JSON-RPC error response 00:19:52.673 response: 00:19:52.673 { 00:19:52.673 "code": -1, 00:19:52.673 "message": "Operation not permitted" 00:19:52.673 } 00:19:52.673 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:52.932 [2024-11-27 05:41:40.679785] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:52.932 [2024-11-27 05:41:40.679820] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:52.932 request: 00:19:52.932 { 00:19:52.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.932 "host": "nqn.2016-06.io.spdk:host1", 00:19:52.932 "psk": "key0", 00:19:52.932 "method": "nvmf_subsystem_add_host", 00:19:52.932 "req_id": 1 00:19:52.932 } 00:19:52.932 Got JSON-RPC error response 00:19:52.932 response: 00:19:52.932 { 00:19:52.932 "code": -32603, 00:19:52.932 "message": "Internal error" 00:19:52.932 } 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1783146 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1783146 ']' 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1783146 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:52.932 05:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1783146 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1783146' 00:19:52.932 killing process with pid 1783146 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1783146 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1783146 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.AfYjBaYj1B 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1783513 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1783513 00:19:52.932 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1783513 ']' 00:19:52.933 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.933 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.933 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.933 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.933 05:41:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.192 [2024-11-27 05:41:40.970055] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:19:53.192 [2024-11-27 05:41:40.970103] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.193 [2024-11-27 05:41:41.047433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.193 [2024-11-27 05:41:41.087653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.193 [2024-11-27 05:41:41.087694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.193 [2024-11-27 05:41:41.087701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.193 [2024-11-27 05:41:41.087707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.193 [2024-11-27 05:41:41.087712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:53.193 [2024-11-27 05:41:41.088282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.193 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.193 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.193 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.193 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.193 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.452 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.452 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.AfYjBaYj1B 00:19:53.452 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.AfYjBaYj1B 00:19:53.452 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.452 [2024-11-27 05:41:41.387314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.452 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.710 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.969 [2024-11-27 05:41:41.756284] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.969 [2024-11-27 05:41:41.756524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:53.969 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:53.969 malloc0 00:19:53.969 05:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.228 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AfYjBaYj1B 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1783841 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1783841 /var/tmp/bdevperf.sock 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1783841 ']' 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:54.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.488 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.748 [2024-11-27 05:41:42.521813] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:19:54.748 [2024-11-27 05:41:42.521866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783841 ] 00:19:54.748 [2024-11-27 05:41:42.595974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.748 [2024-11-27 05:41:42.637212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.748 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.748 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:54.748 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AfYjBaYj1B 00:19:55.008 05:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:55.267 [2024-11-27 05:41:43.068239] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.267 TLSTESTn1 00:19:55.267 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:55.526 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:55.526 "subsystems": [ 00:19:55.526 { 00:19:55.526 "subsystem": "keyring", 00:19:55.526 "config": [ 00:19:55.526 { 00:19:55.526 "method": "keyring_file_add_key", 00:19:55.526 "params": { 00:19:55.526 "name": "key0", 00:19:55.526 "path": "/tmp/tmp.AfYjBaYj1B" 00:19:55.526 } 00:19:55.526 } 00:19:55.526 ] 00:19:55.526 }, 00:19:55.526 { 00:19:55.526 "subsystem": "iobuf", 00:19:55.526 "config": [ 00:19:55.526 { 00:19:55.526 "method": "iobuf_set_options", 00:19:55.526 "params": { 00:19:55.526 "small_pool_count": 8192, 00:19:55.526 "large_pool_count": 1024, 00:19:55.526 "small_bufsize": 8192, 00:19:55.526 "large_bufsize": 135168, 00:19:55.526 "enable_numa": false 00:19:55.526 } 00:19:55.526 } 00:19:55.526 ] 00:19:55.526 }, 00:19:55.526 { 00:19:55.526 "subsystem": "sock", 00:19:55.526 "config": [ 00:19:55.526 { 00:19:55.526 "method": "sock_set_default_impl", 00:19:55.526 "params": { 00:19:55.526 "impl_name": "posix" 00:19:55.526 } 00:19:55.526 }, 00:19:55.526 { 00:19:55.526 "method": "sock_impl_set_options", 00:19:55.526 "params": { 00:19:55.526 "impl_name": "ssl", 00:19:55.526 "recv_buf_size": 4096, 00:19:55.526 "send_buf_size": 4096, 00:19:55.526 "enable_recv_pipe": true, 00:19:55.526 "enable_quickack": false, 00:19:55.526 "enable_placement_id": 0, 00:19:55.526 "enable_zerocopy_send_server": true, 00:19:55.526 "enable_zerocopy_send_client": false, 00:19:55.527 "zerocopy_threshold": 0, 00:19:55.527 "tls_version": 0, 00:19:55.527 "enable_ktls": false 00:19:55.527 } 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "method": "sock_impl_set_options", 00:19:55.527 "params": { 00:19:55.527 "impl_name": "posix", 00:19:55.527 "recv_buf_size": 2097152, 00:19:55.527 "send_buf_size": 2097152, 00:19:55.527 "enable_recv_pipe": true, 00:19:55.527 "enable_quickack": false, 00:19:55.527 "enable_placement_id": 0, 
00:19:55.527 "enable_zerocopy_send_server": true, 00:19:55.527 "enable_zerocopy_send_client": false, 00:19:55.527 "zerocopy_threshold": 0, 00:19:55.527 "tls_version": 0, 00:19:55.527 "enable_ktls": false 00:19:55.527 } 00:19:55.527 } 00:19:55.527 ] 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "subsystem": "vmd", 00:19:55.527 "config": [] 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "subsystem": "accel", 00:19:55.527 "config": [ 00:19:55.527 { 00:19:55.527 "method": "accel_set_options", 00:19:55.527 "params": { 00:19:55.527 "small_cache_size": 128, 00:19:55.527 "large_cache_size": 16, 00:19:55.527 "task_count": 2048, 00:19:55.527 "sequence_count": 2048, 00:19:55.527 "buf_count": 2048 00:19:55.527 } 00:19:55.527 } 00:19:55.527 ] 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "subsystem": "bdev", 00:19:55.527 "config": [ 00:19:55.527 { 00:19:55.527 "method": "bdev_set_options", 00:19:55.527 "params": { 00:19:55.527 "bdev_io_pool_size": 65535, 00:19:55.527 "bdev_io_cache_size": 256, 00:19:55.527 "bdev_auto_examine": true, 00:19:55.527 "iobuf_small_cache_size": 128, 00:19:55.527 "iobuf_large_cache_size": 16 00:19:55.527 } 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "method": "bdev_raid_set_options", 00:19:55.527 "params": { 00:19:55.527 "process_window_size_kb": 1024, 00:19:55.527 "process_max_bandwidth_mb_sec": 0 00:19:55.527 } 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "method": "bdev_iscsi_set_options", 00:19:55.527 "params": { 00:19:55.527 "timeout_sec": 30 00:19:55.527 } 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "method": "bdev_nvme_set_options", 00:19:55.527 "params": { 00:19:55.527 "action_on_timeout": "none", 00:19:55.527 "timeout_us": 0, 00:19:55.527 "timeout_admin_us": 0, 00:19:55.527 "keep_alive_timeout_ms": 10000, 00:19:55.527 "arbitration_burst": 0, 00:19:55.527 "low_priority_weight": 0, 00:19:55.527 "medium_priority_weight": 0, 00:19:55.527 "high_priority_weight": 0, 00:19:55.527 "nvme_adminq_poll_period_us": 10000, 00:19:55.527 "nvme_ioq_poll_period_us": 0, 
00:19:55.527 "io_queue_requests": 0, 00:19:55.527 "delay_cmd_submit": true, 00:19:55.527 "transport_retry_count": 4, 00:19:55.527 "bdev_retry_count": 3, 00:19:55.527 "transport_ack_timeout": 0, 00:19:55.527 "ctrlr_loss_timeout_sec": 0, 00:19:55.527 "reconnect_delay_sec": 0, 00:19:55.527 "fast_io_fail_timeout_sec": 0, 00:19:55.527 "disable_auto_failback": false, 00:19:55.527 "generate_uuids": false, 00:19:55.527 "transport_tos": 0, 00:19:55.527 "nvme_error_stat": false, 00:19:55.527 "rdma_srq_size": 0, 00:19:55.527 "io_path_stat": false, 00:19:55.527 "allow_accel_sequence": false, 00:19:55.527 "rdma_max_cq_size": 0, 00:19:55.527 "rdma_cm_event_timeout_ms": 0, 00:19:55.527 "dhchap_digests": [ 00:19:55.527 "sha256", 00:19:55.527 "sha384", 00:19:55.527 "sha512" 00:19:55.527 ], 00:19:55.527 "dhchap_dhgroups": [ 00:19:55.527 "null", 00:19:55.527 "ffdhe2048", 00:19:55.527 "ffdhe3072", 00:19:55.527 "ffdhe4096", 00:19:55.527 "ffdhe6144", 00:19:55.527 "ffdhe8192" 00:19:55.527 ] 00:19:55.527 } 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "method": "bdev_nvme_set_hotplug", 00:19:55.527 "params": { 00:19:55.527 "period_us": 100000, 00:19:55.527 "enable": false 00:19:55.527 } 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "method": "bdev_malloc_create", 00:19:55.527 "params": { 00:19:55.527 "name": "malloc0", 00:19:55.527 "num_blocks": 8192, 00:19:55.527 "block_size": 4096, 00:19:55.527 "physical_block_size": 4096, 00:19:55.527 "uuid": "69dc340a-480a-42e0-9f9f-fd1be8666a25", 00:19:55.527 "optimal_io_boundary": 0, 00:19:55.527 "md_size": 0, 00:19:55.527 "dif_type": 0, 00:19:55.527 "dif_is_head_of_md": false, 00:19:55.527 "dif_pi_format": 0 00:19:55.527 } 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "method": "bdev_wait_for_examine" 00:19:55.527 } 00:19:55.527 ] 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "subsystem": "nbd", 00:19:55.527 "config": [] 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "subsystem": "scheduler", 00:19:55.527 "config": [ 00:19:55.527 { 00:19:55.527 "method": 
"framework_set_scheduler", 00:19:55.527 "params": { 00:19:55.527 "name": "static" 00:19:55.527 } 00:19:55.527 } 00:19:55.527 ] 00:19:55.527 }, 00:19:55.527 { 00:19:55.527 "subsystem": "nvmf", 00:19:55.527 "config": [ 00:19:55.527 { 00:19:55.527 "method": "nvmf_set_config", 00:19:55.527 "params": { 00:19:55.527 "discovery_filter": "match_any", 00:19:55.527 "admin_cmd_passthru": { 00:19:55.527 "identify_ctrlr": false 00:19:55.527 }, 00:19:55.527 "dhchap_digests": [ 00:19:55.527 "sha256", 00:19:55.527 "sha384", 00:19:55.527 "sha512" 00:19:55.527 ], 00:19:55.527 "dhchap_dhgroups": [ 00:19:55.528 "null", 00:19:55.528 "ffdhe2048", 00:19:55.528 "ffdhe3072", 00:19:55.528 "ffdhe4096", 00:19:55.528 "ffdhe6144", 00:19:55.528 "ffdhe8192" 00:19:55.528 ] 00:19:55.528 } 00:19:55.528 }, 00:19:55.528 { 00:19:55.528 "method": "nvmf_set_max_subsystems", 00:19:55.528 "params": { 00:19:55.528 "max_subsystems": 1024 00:19:55.528 } 00:19:55.528 }, 00:19:55.528 { 00:19:55.528 "method": "nvmf_set_crdt", 00:19:55.528 "params": { 00:19:55.528 "crdt1": 0, 00:19:55.528 "crdt2": 0, 00:19:55.528 "crdt3": 0 00:19:55.528 } 00:19:55.528 }, 00:19:55.528 { 00:19:55.528 "method": "nvmf_create_transport", 00:19:55.528 "params": { 00:19:55.528 "trtype": "TCP", 00:19:55.528 "max_queue_depth": 128, 00:19:55.528 "max_io_qpairs_per_ctrlr": 127, 00:19:55.528 "in_capsule_data_size": 4096, 00:19:55.528 "max_io_size": 131072, 00:19:55.528 "io_unit_size": 131072, 00:19:55.528 "max_aq_depth": 128, 00:19:55.528 "num_shared_buffers": 511, 00:19:55.528 "buf_cache_size": 4294967295, 00:19:55.528 "dif_insert_or_strip": false, 00:19:55.528 "zcopy": false, 00:19:55.528 "c2h_success": false, 00:19:55.528 "sock_priority": 0, 00:19:55.528 "abort_timeout_sec": 1, 00:19:55.528 "ack_timeout": 0, 00:19:55.528 "data_wr_pool_size": 0 00:19:55.528 } 00:19:55.528 }, 00:19:55.528 { 00:19:55.528 "method": "nvmf_create_subsystem", 00:19:55.528 "params": { 00:19:55.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.528 
"allow_any_host": false, 00:19:55.528 "serial_number": "SPDK00000000000001", 00:19:55.528 "model_number": "SPDK bdev Controller", 00:19:55.528 "max_namespaces": 10, 00:19:55.528 "min_cntlid": 1, 00:19:55.528 "max_cntlid": 65519, 00:19:55.528 "ana_reporting": false 00:19:55.528 } 00:19:55.528 }, 00:19:55.528 { 00:19:55.528 "method": "nvmf_subsystem_add_host", 00:19:55.528 "params": { 00:19:55.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.528 "host": "nqn.2016-06.io.spdk:host1", 00:19:55.528 "psk": "key0" 00:19:55.528 } 00:19:55.528 }, 00:19:55.528 { 00:19:55.528 "method": "nvmf_subsystem_add_ns", 00:19:55.528 "params": { 00:19:55.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.528 "namespace": { 00:19:55.528 "nsid": 1, 00:19:55.528 "bdev_name": "malloc0", 00:19:55.528 "nguid": "69DC340A480A42E09F9FFD1BE8666A25", 00:19:55.528 "uuid": "69dc340a-480a-42e0-9f9f-fd1be8666a25", 00:19:55.528 "no_auto_visible": false 00:19:55.528 } 00:19:55.528 } 00:19:55.528 }, 00:19:55.528 { 00:19:55.528 "method": "nvmf_subsystem_add_listener", 00:19:55.528 "params": { 00:19:55.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.528 "listen_address": { 00:19:55.528 "trtype": "TCP", 00:19:55.528 "adrfam": "IPv4", 00:19:55.528 "traddr": "10.0.0.2", 00:19:55.528 "trsvcid": "4420" 00:19:55.528 }, 00:19:55.528 "secure_channel": true 00:19:55.528 } 00:19:55.528 } 00:19:55.528 ] 00:19:55.528 } 00:19:55.528 ] 00:19:55.528 }' 00:19:55.528 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:55.787 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:55.787 "subsystems": [ 00:19:55.787 { 00:19:55.787 "subsystem": "keyring", 00:19:55.787 "config": [ 00:19:55.787 { 00:19:55.787 "method": "keyring_file_add_key", 00:19:55.787 "params": { 00:19:55.787 "name": "key0", 00:19:55.787 "path": "/tmp/tmp.AfYjBaYj1B" 00:19:55.787 } 
00:19:55.787 } 00:19:55.787 ] 00:19:55.787 }, 00:19:55.787 { 00:19:55.787 "subsystem": "iobuf", 00:19:55.787 "config": [ 00:19:55.787 { 00:19:55.787 "method": "iobuf_set_options", 00:19:55.787 "params": { 00:19:55.787 "small_pool_count": 8192, 00:19:55.787 "large_pool_count": 1024, 00:19:55.787 "small_bufsize": 8192, 00:19:55.787 "large_bufsize": 135168, 00:19:55.787 "enable_numa": false 00:19:55.787 } 00:19:55.787 } 00:19:55.787 ] 00:19:55.787 }, 00:19:55.787 { 00:19:55.787 "subsystem": "sock", 00:19:55.787 "config": [ 00:19:55.787 { 00:19:55.787 "method": "sock_set_default_impl", 00:19:55.787 "params": { 00:19:55.787 "impl_name": "posix" 00:19:55.787 } 00:19:55.787 }, 00:19:55.787 { 00:19:55.787 "method": "sock_impl_set_options", 00:19:55.787 "params": { 00:19:55.787 "impl_name": "ssl", 00:19:55.787 "recv_buf_size": 4096, 00:19:55.787 "send_buf_size": 4096, 00:19:55.787 "enable_recv_pipe": true, 00:19:55.787 "enable_quickack": false, 00:19:55.787 "enable_placement_id": 0, 00:19:55.787 "enable_zerocopy_send_server": true, 00:19:55.787 "enable_zerocopy_send_client": false, 00:19:55.787 "zerocopy_threshold": 0, 00:19:55.787 "tls_version": 0, 00:19:55.787 "enable_ktls": false 00:19:55.787 } 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "method": "sock_impl_set_options", 00:19:55.788 "params": { 00:19:55.788 "impl_name": "posix", 00:19:55.788 "recv_buf_size": 2097152, 00:19:55.788 "send_buf_size": 2097152, 00:19:55.788 "enable_recv_pipe": true, 00:19:55.788 "enable_quickack": false, 00:19:55.788 "enable_placement_id": 0, 00:19:55.788 "enable_zerocopy_send_server": true, 00:19:55.788 "enable_zerocopy_send_client": false, 00:19:55.788 "zerocopy_threshold": 0, 00:19:55.788 "tls_version": 0, 00:19:55.788 "enable_ktls": false 00:19:55.788 } 00:19:55.788 } 00:19:55.788 ] 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "subsystem": "vmd", 00:19:55.788 "config": [] 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "subsystem": "accel", 00:19:55.788 "config": [ 00:19:55.788 { 00:19:55.788 
"method": "accel_set_options", 00:19:55.788 "params": { 00:19:55.788 "small_cache_size": 128, 00:19:55.788 "large_cache_size": 16, 00:19:55.788 "task_count": 2048, 00:19:55.788 "sequence_count": 2048, 00:19:55.788 "buf_count": 2048 00:19:55.788 } 00:19:55.788 } 00:19:55.788 ] 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "subsystem": "bdev", 00:19:55.788 "config": [ 00:19:55.788 { 00:19:55.788 "method": "bdev_set_options", 00:19:55.788 "params": { 00:19:55.788 "bdev_io_pool_size": 65535, 00:19:55.788 "bdev_io_cache_size": 256, 00:19:55.788 "bdev_auto_examine": true, 00:19:55.788 "iobuf_small_cache_size": 128, 00:19:55.788 "iobuf_large_cache_size": 16 00:19:55.788 } 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "method": "bdev_raid_set_options", 00:19:55.788 "params": { 00:19:55.788 "process_window_size_kb": 1024, 00:19:55.788 "process_max_bandwidth_mb_sec": 0 00:19:55.788 } 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "method": "bdev_iscsi_set_options", 00:19:55.788 "params": { 00:19:55.788 "timeout_sec": 30 00:19:55.788 } 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "method": "bdev_nvme_set_options", 00:19:55.788 "params": { 00:19:55.788 "action_on_timeout": "none", 00:19:55.788 "timeout_us": 0, 00:19:55.788 "timeout_admin_us": 0, 00:19:55.788 "keep_alive_timeout_ms": 10000, 00:19:55.788 "arbitration_burst": 0, 00:19:55.788 "low_priority_weight": 0, 00:19:55.788 "medium_priority_weight": 0, 00:19:55.788 "high_priority_weight": 0, 00:19:55.788 "nvme_adminq_poll_period_us": 10000, 00:19:55.788 "nvme_ioq_poll_period_us": 0, 00:19:55.788 "io_queue_requests": 512, 00:19:55.788 "delay_cmd_submit": true, 00:19:55.788 "transport_retry_count": 4, 00:19:55.788 "bdev_retry_count": 3, 00:19:55.788 "transport_ack_timeout": 0, 00:19:55.788 "ctrlr_loss_timeout_sec": 0, 00:19:55.788 "reconnect_delay_sec": 0, 00:19:55.788 "fast_io_fail_timeout_sec": 0, 00:19:55.788 "disable_auto_failback": false, 00:19:55.788 "generate_uuids": false, 00:19:55.788 "transport_tos": 0, 00:19:55.788 
"nvme_error_stat": false, 00:19:55.788 "rdma_srq_size": 0, 00:19:55.788 "io_path_stat": false, 00:19:55.788 "allow_accel_sequence": false, 00:19:55.788 "rdma_max_cq_size": 0, 00:19:55.788 "rdma_cm_event_timeout_ms": 0, 00:19:55.788 "dhchap_digests": [ 00:19:55.788 "sha256", 00:19:55.788 "sha384", 00:19:55.788 "sha512" 00:19:55.788 ], 00:19:55.788 "dhchap_dhgroups": [ 00:19:55.788 "null", 00:19:55.788 "ffdhe2048", 00:19:55.788 "ffdhe3072", 00:19:55.788 "ffdhe4096", 00:19:55.788 "ffdhe6144", 00:19:55.788 "ffdhe8192" 00:19:55.788 ] 00:19:55.788 } 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "method": "bdev_nvme_attach_controller", 00:19:55.788 "params": { 00:19:55.788 "name": "TLSTEST", 00:19:55.788 "trtype": "TCP", 00:19:55.788 "adrfam": "IPv4", 00:19:55.788 "traddr": "10.0.0.2", 00:19:55.788 "trsvcid": "4420", 00:19:55.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.788 "prchk_reftag": false, 00:19:55.788 "prchk_guard": false, 00:19:55.788 "ctrlr_loss_timeout_sec": 0, 00:19:55.788 "reconnect_delay_sec": 0, 00:19:55.788 "fast_io_fail_timeout_sec": 0, 00:19:55.788 "psk": "key0", 00:19:55.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.788 "hdgst": false, 00:19:55.788 "ddgst": false, 00:19:55.788 "multipath": "multipath" 00:19:55.788 } 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "method": "bdev_nvme_set_hotplug", 00:19:55.788 "params": { 00:19:55.788 "period_us": 100000, 00:19:55.788 "enable": false 00:19:55.788 } 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "method": "bdev_wait_for_examine" 00:19:55.788 } 00:19:55.788 ] 00:19:55.788 }, 00:19:55.788 { 00:19:55.788 "subsystem": "nbd", 00:19:55.788 "config": [] 00:19:55.788 } 00:19:55.788 ] 00:19:55.788 }' 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1783841 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1783841 ']' 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1783841 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1783841 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1783841' 00:19:55.788 killing process with pid 1783841 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1783841 00:19:55.788 Received shutdown signal, test time was about 10.000000 seconds 00:19:55.788 00:19:55.788 Latency(us) 00:19:55.788 [2024-11-27T04:41:43.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.788 [2024-11-27T04:41:43.792Z] =================================================================================================================== 00:19:55.788 [2024-11-27T04:41:43.792Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:55.788 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1783841 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1783513 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1783513 ']' 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1783513 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1783513 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1783513' 00:19:56.048 killing process with pid 1783513 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1783513 00:19:56.048 05:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1783513 00:19:56.308 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:56.308 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:56.308 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:56.308 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:56.308 "subsystems": [ 00:19:56.308 { 00:19:56.308 "subsystem": "keyring", 00:19:56.308 "config": [ 00:19:56.308 { 00:19:56.308 "method": "keyring_file_add_key", 00:19:56.308 "params": { 00:19:56.308 "name": "key0", 00:19:56.308 "path": "/tmp/tmp.AfYjBaYj1B" 00:19:56.308 } 00:19:56.308 } 00:19:56.308 ] 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "subsystem": "iobuf", 00:19:56.308 "config": [ 00:19:56.308 { 00:19:56.308 "method": "iobuf_set_options", 00:19:56.308 "params": { 00:19:56.308 "small_pool_count": 8192, 00:19:56.308 "large_pool_count": 1024, 00:19:56.308 "small_bufsize": 8192, 00:19:56.308 "large_bufsize": 135168, 00:19:56.308 "enable_numa": false 00:19:56.308 } 00:19:56.308 } 00:19:56.308 ] 00:19:56.308 }, 
00:19:56.308 { 00:19:56.308 "subsystem": "sock", 00:19:56.308 "config": [ 00:19:56.308 { 00:19:56.308 "method": "sock_set_default_impl", 00:19:56.308 "params": { 00:19:56.308 "impl_name": "posix" 00:19:56.308 } 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "method": "sock_impl_set_options", 00:19:56.308 "params": { 00:19:56.308 "impl_name": "ssl", 00:19:56.308 "recv_buf_size": 4096, 00:19:56.308 "send_buf_size": 4096, 00:19:56.308 "enable_recv_pipe": true, 00:19:56.308 "enable_quickack": false, 00:19:56.308 "enable_placement_id": 0, 00:19:56.308 "enable_zerocopy_send_server": true, 00:19:56.308 "enable_zerocopy_send_client": false, 00:19:56.308 "zerocopy_threshold": 0, 00:19:56.308 "tls_version": 0, 00:19:56.308 "enable_ktls": false 00:19:56.308 } 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "method": "sock_impl_set_options", 00:19:56.308 "params": { 00:19:56.308 "impl_name": "posix", 00:19:56.308 "recv_buf_size": 2097152, 00:19:56.308 "send_buf_size": 2097152, 00:19:56.308 "enable_recv_pipe": true, 00:19:56.308 "enable_quickack": false, 00:19:56.308 "enable_placement_id": 0, 00:19:56.308 "enable_zerocopy_send_server": true, 00:19:56.308 "enable_zerocopy_send_client": false, 00:19:56.308 "zerocopy_threshold": 0, 00:19:56.308 "tls_version": 0, 00:19:56.308 "enable_ktls": false 00:19:56.308 } 00:19:56.308 } 00:19:56.308 ] 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "subsystem": "vmd", 00:19:56.308 "config": [] 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "subsystem": "accel", 00:19:56.308 "config": [ 00:19:56.308 { 00:19:56.308 "method": "accel_set_options", 00:19:56.308 "params": { 00:19:56.308 "small_cache_size": 128, 00:19:56.308 "large_cache_size": 16, 00:19:56.308 "task_count": 2048, 00:19:56.308 "sequence_count": 2048, 00:19:56.308 "buf_count": 2048 00:19:56.308 } 00:19:56.308 } 00:19:56.308 ] 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "subsystem": "bdev", 00:19:56.308 "config": [ 00:19:56.308 { 00:19:56.308 "method": "bdev_set_options", 00:19:56.308 "params": { 
00:19:56.308 "bdev_io_pool_size": 65535, 00:19:56.308 "bdev_io_cache_size": 256, 00:19:56.308 "bdev_auto_examine": true, 00:19:56.308 "iobuf_small_cache_size": 128, 00:19:56.308 "iobuf_large_cache_size": 16 00:19:56.308 } 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "method": "bdev_raid_set_options", 00:19:56.308 "params": { 00:19:56.308 "process_window_size_kb": 1024, 00:19:56.308 "process_max_bandwidth_mb_sec": 0 00:19:56.308 } 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "method": "bdev_iscsi_set_options", 00:19:56.308 "params": { 00:19:56.308 "timeout_sec": 30 00:19:56.308 } 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "method": "bdev_nvme_set_options", 00:19:56.308 "params": { 00:19:56.308 "action_on_timeout": "none", 00:19:56.308 "timeout_us": 0, 00:19:56.308 "timeout_admin_us": 0, 00:19:56.308 "keep_alive_timeout_ms": 10000, 00:19:56.308 "arbitration_burst": 0, 00:19:56.308 "low_priority_weight": 0, 00:19:56.308 "medium_priority_weight": 0, 00:19:56.308 "high_priority_weight": 0, 00:19:56.308 "nvme_adminq_poll_period_us": 10000, 00:19:56.308 "nvme_ioq_poll_period_us": 0, 00:19:56.308 "io_queue_requests": 0, 00:19:56.308 "delay_cmd_submit": true, 00:19:56.308 "transport_retry_count": 4, 00:19:56.308 "bdev_retry_count": 3, 00:19:56.308 "transport_ack_timeout": 0, 00:19:56.308 "ctrlr_loss_timeout_sec": 0, 00:19:56.308 "reconnect_delay_sec": 0, 00:19:56.308 "fast_io_fail_timeout_sec": 0, 00:19:56.308 "disable_auto_failback": false, 00:19:56.308 "generate_uuids": false, 00:19:56.308 "transport_tos": 0, 00:19:56.308 "nvme_error_stat": false, 00:19:56.308 "rdma_srq_size": 0, 00:19:56.308 "io_path_stat": false, 00:19:56.308 "allow_accel_sequence": false, 00:19:56.308 "rdma_max_cq_size": 0, 00:19:56.308 "rdma_cm_event_timeout_ms": 0, 00:19:56.308 "dhchap_digests": [ 00:19:56.308 "sha256", 00:19:56.308 "sha384", 00:19:56.308 "sha512" 00:19:56.308 ], 00:19:56.308 "dhchap_dhgroups": [ 00:19:56.308 "null", 00:19:56.308 "ffdhe2048", 00:19:56.308 "ffdhe3072", 00:19:56.308 
"ffdhe4096", 00:19:56.308 "ffdhe6144", 00:19:56.308 "ffdhe8192" 00:19:56.308 ] 00:19:56.308 } 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "method": "bdev_nvme_set_hotplug", 00:19:56.308 "params": { 00:19:56.308 "period_us": 100000, 00:19:56.308 "enable": false 00:19:56.308 } 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "method": "bdev_malloc_create", 00:19:56.308 "params": { 00:19:56.308 "name": "malloc0", 00:19:56.308 "num_blocks": 8192, 00:19:56.308 "block_size": 4096, 00:19:56.308 "physical_block_size": 4096, 00:19:56.308 "uuid": "69dc340a-480a-42e0-9f9f-fd1be8666a25", 00:19:56.308 "optimal_io_boundary": 0, 00:19:56.308 "md_size": 0, 00:19:56.308 "dif_type": 0, 00:19:56.308 "dif_is_head_of_md": false, 00:19:56.308 "dif_pi_format": 0 00:19:56.308 } 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "method": "bdev_wait_for_examine" 00:19:56.308 } 00:19:56.308 ] 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "subsystem": "nbd", 00:19:56.308 "config": [] 00:19:56.308 }, 00:19:56.308 { 00:19:56.308 "subsystem": "scheduler", 00:19:56.308 "config": [ 00:19:56.308 { 00:19:56.308 "method": "framework_set_scheduler", 00:19:56.308 "params": { 00:19:56.308 "name": "static" 00:19:56.309 } 00:19:56.309 } 00:19:56.309 ] 00:19:56.309 }, 00:19:56.309 { 00:19:56.309 "subsystem": "nvmf", 00:19:56.309 "config": [ 00:19:56.309 { 00:19:56.309 "method": "nvmf_set_config", 00:19:56.309 "params": { 00:19:56.309 "discovery_filter": "match_any", 00:19:56.309 "admin_cmd_passthru": { 00:19:56.309 "identify_ctrlr": false 00:19:56.309 }, 00:19:56.309 "dhchap_digests": [ 00:19:56.309 "sha256", 00:19:56.309 "sha384", 00:19:56.309 "sha512" 00:19:56.309 ], 00:19:56.309 "dhchap_dhgroups": [ 00:19:56.309 "null", 00:19:56.309 "ffdhe2048", 00:19:56.309 "ffdhe3072", 00:19:56.309 "ffdhe4096", 00:19:56.309 "ffdhe6144", 00:19:56.309 "ffdhe8192" 00:19:56.309 ] 00:19:56.309 } 00:19:56.309 }, 00:19:56.309 { 00:19:56.309 "method": "nvmf_set_max_subsystems", 00:19:56.309 "params": { 00:19:56.309 "max_subsystems": 1024 
00:19:56.309 } 00:19:56.309 }, 00:19:56.309 { 00:19:56.309 "method": "nvmf_set_crdt", 00:19:56.309 "params": { 00:19:56.309 "crdt1": 0, 00:19:56.309 "crdt2": 0, 00:19:56.309 "crdt3": 0 00:19:56.309 } 00:19:56.309 }, 00:19:56.309 { 00:19:56.309 "method": "nvmf_create_transport", 00:19:56.309 "params": { 00:19:56.309 "trtype": "TCP", 00:19:56.309 "max_queue_depth": 128, 00:19:56.309 "max_io_qpairs_per_ctrlr": 127, 00:19:56.309 "in_capsule_data_size": 4096, 00:19:56.309 "max_io_size": 131072, 00:19:56.309 "io_unit_size": 131072, 00:19:56.309 "max_aq_depth": 128, 00:19:56.309 "num_shared_buffers": 511, 00:19:56.309 "buf_cache_size": 4294967295, 00:19:56.309 "dif_insert_or_strip": false, 00:19:56.309 "zcopy": false, 00:19:56.309 "c2h_success": false, 00:19:56.309 "sock_priority": 0, 00:19:56.309 "abort_timeout_sec": 1, 00:19:56.309 "ack_timeout": 0, 00:19:56.309 "data_wr_pool_size": 0 00:19:56.309 } 00:19:56.309 }, 00:19:56.309 { 00:19:56.309 "method": "nvmf_create_subsystem", 00:19:56.309 "params": { 00:19:56.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.309 "allow_any_host": false, 00:19:56.309 "serial_number": "SPDK00000000000001", 00:19:56.309 "model_number": "SPDK bdev Controller", 00:19:56.309 "max_namespaces": 10, 00:19:56.309 "min_cntlid": 1, 00:19:56.309 "max_cntlid": 65519, 00:19:56.309 "ana_reporting": false 00:19:56.309 } 00:19:56.309 }, 00:19:56.309 { 00:19:56.309 "method": "nvmf_subsystem_add_host", 00:19:56.309 "params": { 00:19:56.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.309 "host": "nqn.2016-06.io.spdk:host1", 00:19:56.309 "psk": "key0" 00:19:56.309 } 00:19:56.309 }, 00:19:56.309 { 00:19:56.309 "method": "nvmf_subsystem_add_ns", 00:19:56.309 "params": { 00:19:56.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.309 "namespace": { 00:19:56.309 "nsid": 1, 00:19:56.309 "bdev_name": "malloc0", 00:19:56.309 "nguid": "69DC340A480A42E09F9FFD1BE8666A25", 00:19:56.309 "uuid": "69dc340a-480a-42e0-9f9f-fd1be8666a25", 00:19:56.309 "no_auto_visible": 
false 00:19:56.309 } 00:19:56.309 } 00:19:56.309 }, 00:19:56.309 { 00:19:56.309 "method": "nvmf_subsystem_add_listener", 00:19:56.309 "params": { 00:19:56.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.309 "listen_address": { 00:19:56.309 "trtype": "TCP", 00:19:56.309 "adrfam": "IPv4", 00:19:56.309 "traddr": "10.0.0.2", 00:19:56.309 "trsvcid": "4420" 00:19:56.309 }, 00:19:56.309 "secure_channel": true 00:19:56.309 } 00:19:56.309 } 00:19:56.309 ] 00:19:56.309 } 00:19:56.309 ] 00:19:56.309 }' 00:19:56.309 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.309 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1784128 00:19:56.309 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:56.309 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1784128 00:19:56.309 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1784128 ']' 00:19:56.309 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.309 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:56.309 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:56.309 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:56.309 05:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.309 [2024-11-27 05:41:44.185920] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:19:56.309 [2024-11-27 05:41:44.185967] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.309 [2024-11-27 05:41:44.252844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.309 [2024-11-27 05:41:44.292974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.309 [2024-11-27 05:41:44.293010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.309 [2024-11-27 05:41:44.293016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.309 [2024-11-27 05:41:44.293023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.309 [2024-11-27 05:41:44.293029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:56.309 [2024-11-27 05:41:44.293621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.570 [2024-11-27 05:41:44.505135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.570 [2024-11-27 05:41:44.537168] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.570 [2024-11-27 05:41:44.537395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1784195 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1784195 /var/tmp/bdevperf.sock 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1784195 ']' 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.141 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:57.141 "subsystems": [ 00:19:57.141 { 00:19:57.141 "subsystem": "keyring", 00:19:57.141 "config": [ 00:19:57.141 { 00:19:57.141 "method": "keyring_file_add_key", 00:19:57.141 "params": { 00:19:57.141 "name": "key0", 00:19:57.141 "path": "/tmp/tmp.AfYjBaYj1B" 00:19:57.141 } 00:19:57.141 } 00:19:57.141 ] 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "subsystem": "iobuf", 00:19:57.141 "config": [ 00:19:57.141 { 00:19:57.141 "method": "iobuf_set_options", 00:19:57.141 "params": { 00:19:57.141 "small_pool_count": 8192, 00:19:57.141 "large_pool_count": 1024, 00:19:57.141 "small_bufsize": 8192, 00:19:57.141 "large_bufsize": 135168, 00:19:57.141 "enable_numa": false 00:19:57.141 } 00:19:57.141 } 00:19:57.141 ] 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "subsystem": "sock", 00:19:57.141 "config": [ 00:19:57.141 { 00:19:57.141 "method": "sock_set_default_impl", 00:19:57.141 "params": { 00:19:57.141 "impl_name": "posix" 00:19:57.141 } 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "method": "sock_impl_set_options", 00:19:57.141 "params": { 00:19:57.141 "impl_name": "ssl", 00:19:57.141 "recv_buf_size": 4096, 00:19:57.141 "send_buf_size": 4096, 00:19:57.141 "enable_recv_pipe": true, 00:19:57.141 "enable_quickack": false, 00:19:57.141 "enable_placement_id": 0, 00:19:57.141 "enable_zerocopy_send_server": true, 00:19:57.141 "enable_zerocopy_send_client": false, 00:19:57.141 "zerocopy_threshold": 0, 00:19:57.141 "tls_version": 0, 00:19:57.141 "enable_ktls": false 00:19:57.141 } 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "method": "sock_impl_set_options", 00:19:57.141 "params": { 
00:19:57.141 "impl_name": "posix", 00:19:57.141 "recv_buf_size": 2097152, 00:19:57.141 "send_buf_size": 2097152, 00:19:57.141 "enable_recv_pipe": true, 00:19:57.141 "enable_quickack": false, 00:19:57.141 "enable_placement_id": 0, 00:19:57.141 "enable_zerocopy_send_server": true, 00:19:57.141 "enable_zerocopy_send_client": false, 00:19:57.141 "zerocopy_threshold": 0, 00:19:57.141 "tls_version": 0, 00:19:57.141 "enable_ktls": false 00:19:57.141 } 00:19:57.141 } 00:19:57.141 ] 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "subsystem": "vmd", 00:19:57.141 "config": [] 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "subsystem": "accel", 00:19:57.141 "config": [ 00:19:57.141 { 00:19:57.141 "method": "accel_set_options", 00:19:57.141 "params": { 00:19:57.141 "small_cache_size": 128, 00:19:57.141 "large_cache_size": 16, 00:19:57.141 "task_count": 2048, 00:19:57.141 "sequence_count": 2048, 00:19:57.141 "buf_count": 2048 00:19:57.141 } 00:19:57.141 } 00:19:57.141 ] 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "subsystem": "bdev", 00:19:57.141 "config": [ 00:19:57.141 { 00:19:57.141 "method": "bdev_set_options", 00:19:57.141 "params": { 00:19:57.141 "bdev_io_pool_size": 65535, 00:19:57.141 "bdev_io_cache_size": 256, 00:19:57.141 "bdev_auto_examine": true, 00:19:57.141 "iobuf_small_cache_size": 128, 00:19:57.141 "iobuf_large_cache_size": 16 00:19:57.141 } 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "method": "bdev_raid_set_options", 00:19:57.141 "params": { 00:19:57.141 "process_window_size_kb": 1024, 00:19:57.141 "process_max_bandwidth_mb_sec": 0 00:19:57.141 } 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "method": "bdev_iscsi_set_options", 00:19:57.141 "params": { 00:19:57.141 "timeout_sec": 30 00:19:57.141 } 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "method": "bdev_nvme_set_options", 00:19:57.141 "params": { 00:19:57.141 "action_on_timeout": "none", 00:19:57.141 "timeout_us": 0, 00:19:57.141 "timeout_admin_us": 0, 00:19:57.141 "keep_alive_timeout_ms": 10000, 00:19:57.141 
"arbitration_burst": 0, 00:19:57.141 "low_priority_weight": 0, 00:19:57.141 "medium_priority_weight": 0, 00:19:57.141 "high_priority_weight": 0, 00:19:57.141 "nvme_adminq_poll_period_us": 10000, 00:19:57.141 "nvme_ioq_poll_period_us": 0, 00:19:57.141 "io_queue_requests": 512, 00:19:57.141 "delay_cmd_submit": true, 00:19:57.141 "transport_retry_count": 4, 00:19:57.141 "bdev_retry_count": 3, 00:19:57.141 "transport_ack_timeout": 0, 00:19:57.141 "ctrlr_loss_timeout_sec": 0, 00:19:57.141 "reconnect_delay_sec": 0, 00:19:57.141 "fast_io_fail_timeout_sec": 0, 00:19:57.141 "disable_auto_failback": false, 00:19:57.141 "generate_uuids": false, 00:19:57.141 "transport_tos": 0, 00:19:57.141 "nvme_error_stat": false, 00:19:57.141 "rdma_srq_size": 0, 00:19:57.141 "io_path_stat": false, 00:19:57.141 "allow_accel_sequence": false, 00:19:57.141 "rdma_max_cq_size": 0, 00:19:57.141 "rdma_cm_event_timeout_ms": 0, 00:19:57.141 "dhchap_digests": [ 00:19:57.141 "sha256", 00:19:57.141 "sha384", 00:19:57.141 "sha512" 00:19:57.141 ], 00:19:57.141 "dhchap_dhgroups": [ 00:19:57.141 "null", 00:19:57.141 "ffdhe2048", 00:19:57.141 "ffdhe3072", 00:19:57.141 "ffdhe4096", 00:19:57.141 "ffdhe6144", 00:19:57.141 "ffdhe8192" 00:19:57.141 ] 00:19:57.141 } 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "method": "bdev_nvme_attach_controller", 00:19:57.141 "params": { 00:19:57.141 "name": "TLSTEST", 00:19:57.141 "trtype": "TCP", 00:19:57.141 "adrfam": "IPv4", 00:19:57.141 "traddr": "10.0.0.2", 00:19:57.141 "trsvcid": "4420", 00:19:57.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.141 "prchk_reftag": false, 00:19:57.141 "prchk_guard": false, 00:19:57.141 "ctrlr_loss_timeout_sec": 0, 00:19:57.141 "reconnect_delay_sec": 0, 00:19:57.141 "fast_io_fail_timeout_sec": 0, 00:19:57.141 "psk": "key0", 00:19:57.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.141 "hdgst": false, 00:19:57.141 "ddgst": false, 00:19:57.141 "multipath": "multipath" 00:19:57.141 } 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 
"method": "bdev_nvme_set_hotplug", 00:19:57.141 "params": { 00:19:57.141 "period_us": 100000, 00:19:57.141 "enable": false 00:19:57.141 } 00:19:57.141 }, 00:19:57.141 { 00:19:57.141 "method": "bdev_wait_for_examine" 00:19:57.142 } 00:19:57.142 ] 00:19:57.142 }, 00:19:57.142 { 00:19:57.142 "subsystem": "nbd", 00:19:57.142 "config": [] 00:19:57.142 } 00:19:57.142 ] 00:19:57.142 }' 00:19:57.142 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.142 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.142 [2024-11-27 05:41:45.093143] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:19:57.142 [2024-11-27 05:41:45.093193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1784195 ] 00:19:57.401 [2024-11-27 05:41:45.167407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.401 [2024-11-27 05:41:45.209144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.401 [2024-11-27 05:41:45.362592] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.969 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.969 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:57.969 05:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:58.227 Running I/O for 10 seconds... 
00:20:00.096 5030.00 IOPS, 19.65 MiB/s [2024-11-27T04:41:49.033Z] 5137.50 IOPS, 20.07 MiB/s [2024-11-27T04:41:50.404Z] 5070.00 IOPS, 19.80 MiB/s [2024-11-27T04:41:51.338Z] 5058.75 IOPS, 19.76 MiB/s [2024-11-27T04:41:52.273Z] 5171.60 IOPS, 20.20 MiB/s [2024-11-27T04:41:53.212Z] 5232.00 IOPS, 20.44 MiB/s [2024-11-27T04:41:54.174Z] 5279.71 IOPS, 20.62 MiB/s [2024-11-27T04:41:55.209Z] 5313.00 IOPS, 20.75 MiB/s [2024-11-27T04:41:56.145Z] 5318.67 IOPS, 20.78 MiB/s [2024-11-27T04:41:56.145Z] 5349.30 IOPS, 20.90 MiB/s 00:20:08.141 Latency(us) 00:20:08.141 [2024-11-27T04:41:56.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.141 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:08.141 Verification LBA range: start 0x0 length 0x2000 00:20:08.141 TLSTESTn1 : 10.01 5354.95 20.92 0.00 0.00 23869.65 5305.30 30458.64 00:20:08.141 [2024-11-27T04:41:56.145Z] =================================================================================================================== 00:20:08.141 [2024-11-27T04:41:56.145Z] Total : 5354.95 20.92 0.00 0.00 23869.65 5305.30 30458.64 00:20:08.141 { 00:20:08.141 "results": [ 00:20:08.141 { 00:20:08.141 "job": "TLSTESTn1", 00:20:08.141 "core_mask": "0x4", 00:20:08.141 "workload": "verify", 00:20:08.141 "status": "finished", 00:20:08.141 "verify_range": { 00:20:08.141 "start": 0, 00:20:08.141 "length": 8192 00:20:08.141 }, 00:20:08.141 "queue_depth": 128, 00:20:08.141 "io_size": 4096, 00:20:08.141 "runtime": 10.01279, 00:20:08.141 "iops": 5354.951017648427, 00:20:08.141 "mibps": 20.91777741268917, 00:20:08.141 "io_failed": 0, 00:20:08.141 "io_timeout": 0, 00:20:08.142 "avg_latency_us": 23869.647223089618, 00:20:08.142 "min_latency_us": 5305.295238095238, 00:20:08.142 "max_latency_us": 30458.63619047619 00:20:08.142 } 00:20:08.142 ], 00:20:08.142 "core_count": 1 00:20:08.142 } 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1784195 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1784195 ']' 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1784195 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1784195 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1784195' 00:20:08.142 killing process with pid 1784195 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1784195 00:20:08.142 Received shutdown signal, test time was about 10.000000 seconds 00:20:08.142 00:20:08.142 Latency(us) 00:20:08.142 [2024-11-27T04:41:56.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.142 [2024-11-27T04:41:56.146Z] =================================================================================================================== 00:20:08.142 [2024-11-27T04:41:56.146Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:08.142 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1784195 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1784128 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1784128 ']' 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1784128 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1784128 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1784128' 00:20:08.401 killing process with pid 1784128 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1784128 00:20:08.401 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1784128 00:20:08.660 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:08.660 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.660 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.660 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.660 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1786060 00:20:08.661 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:08.661 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1786060 00:20:08.661 
05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1786060 ']' 00:20:08.661 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.661 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.661 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.661 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.661 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.661 [2024-11-27 05:41:56.572410] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:20:08.661 [2024-11-27 05:41:56.572454] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.661 [2024-11-27 05:41:56.651249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.920 [2024-11-27 05:41:56.690167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.920 [2024-11-27 05:41:56.690205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.920 [2024-11-27 05:41:56.690211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.920 [2024-11-27 05:41:56.690217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:08.920 [2024-11-27 05:41:56.690222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.920 [2024-11-27 05:41:56.690813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.920 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.920 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:08.920 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.920 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.920 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.920 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.920 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.AfYjBaYj1B 00:20:08.920 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.AfYjBaYj1B 00:20:08.920 05:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.180 [2024-11-27 05:41:56.996109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.180 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.439 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.439 [2024-11-27 05:41:57.381100] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:09.439 [2024-11-27 05:41:57.381327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.439 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:09.698 malloc0 00:20:09.698 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:09.957 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.AfYjBaYj1B 00:20:10.217 05:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.217 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:10.217 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1786454 00:20:10.217 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:10.217 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1786454 /var/tmp/bdevperf.sock 00:20:10.217 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1786454 ']' 00:20:10.217 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.217 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.217 
05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.217 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.217 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.476 [2024-11-27 05:41:58.221067] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:20:10.476 [2024-11-27 05:41:58.221117] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1786454 ] 00:20:10.476 [2024-11-27 05:41:58.297181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.476 [2024-11-27 05:41:58.337478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.476 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.476 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:10.477 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AfYjBaYj1B 00:20:10.735 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:10.994 [2024-11-27 05:41:58.794939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:20:10.994 nvme0n1 00:20:10.994 05:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:10.994 Running I/O for 1 seconds... 00:20:12.373 5204.00 IOPS, 20.33 MiB/s 00:20:12.373 Latency(us) 00:20:12.373 [2024-11-27T04:42:00.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.373 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:12.373 Verification LBA range: start 0x0 length 0x2000 00:20:12.373 nvme0n1 : 1.02 5239.69 20.47 0.00 0.00 24252.67 4681.14 21595.67 00:20:12.373 [2024-11-27T04:42:00.377Z] =================================================================================================================== 00:20:12.373 [2024-11-27T04:42:00.377Z] Total : 5239.69 20.47 0.00 0.00 24252.67 4681.14 21595.67 00:20:12.373 { 00:20:12.373 "results": [ 00:20:12.373 { 00:20:12.373 "job": "nvme0n1", 00:20:12.373 "core_mask": "0x2", 00:20:12.373 "workload": "verify", 00:20:12.373 "status": "finished", 00:20:12.373 "verify_range": { 00:20:12.373 "start": 0, 00:20:12.373 "length": 8192 00:20:12.373 }, 00:20:12.373 "queue_depth": 128, 00:20:12.373 "io_size": 4096, 00:20:12.373 "runtime": 1.017617, 00:20:12.373 "iops": 5239.69234004542, 00:20:12.373 "mibps": 20.46754820330242, 00:20:12.373 "io_failed": 0, 00:20:12.373 "io_timeout": 0, 00:20:12.373 "avg_latency_us": 24252.674551495016, 00:20:12.373 "min_latency_us": 4681.142857142857, 00:20:12.373 "max_latency_us": 21595.67238095238 00:20:12.373 } 00:20:12.373 ], 00:20:12.373 "core_count": 1 00:20:12.373 } 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1786454 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1786454 ']' 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1786454 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1786454 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1786454' 00:20:12.373 killing process with pid 1786454 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1786454 00:20:12.373 Received shutdown signal, test time was about 1.000000 seconds 00:20:12.373 00:20:12.373 Latency(us) 00:20:12.373 [2024-11-27T04:42:00.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.373 [2024-11-27T04:42:00.377Z] =================================================================================================================== 00:20:12.373 [2024-11-27T04:42:00.377Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1786454 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1786060 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1786060 ']' 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1786060 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1786060 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1786060' 00:20:12.373 killing process with pid 1786060 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1786060 00:20:12.373 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1786060 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1786769 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1786769 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1786769 ']' 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.633 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.633 [2024-11-27 05:42:00.521876] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:20:12.633 [2024-11-27 05:42:00.521924] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.633 [2024-11-27 05:42:00.601916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.893 [2024-11-27 05:42:00.641005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.893 [2024-11-27 05:42:00.641041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.893 [2024-11-27 05:42:00.641049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.893 [2024-11-27 05:42:00.641055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.893 [2024-11-27 05:42:00.641060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:12.893 [2024-11-27 05:42:00.641655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.893 [2024-11-27 05:42:00.790408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.893 malloc0 00:20:12.893 [2024-11-27 05:42:00.818698] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.893 [2024-11-27 05:42:00.818910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1786909 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1786909 /var/tmp/bdevperf.sock 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1786909 ']' 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.893 05:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.152 [2024-11-27 05:42:00.897308] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:20:13.152 [2024-11-27 05:42:00.897352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1786909 ] 00:20:13.152 [2024-11-27 05:42:00.974108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.152 [2024-11-27 05:42:01.016262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.152 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.152 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:13.152 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AfYjBaYj1B 00:20:13.412 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:13.672 [2024-11-27 05:42:01.456727] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.672 nvme0n1 00:20:13.672 05:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:13.672 Running I/O for 1 seconds... 
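For reference, the target-side TLS setup exercised by the rpc.py calls in this section (keyring_file_add_key, nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_listener -k, nvmf_subsystem_add_host --psk) can be expressed as a JSON config fragment of roughly the following shape. This is an illustrative sketch, not captured output: the NQNs and key path are the ones used by this test, but exact parameter spellings (e.g. "secure_channel" as the JSON counterpart of the -k flag) should be verified against your SPDK version's JSON-RPC documentation.

```json
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {
          "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.AfYjBaYj1B" }
        }
      ]
    },
    {
      "subsystem": "nvmf",
      "config": [
        {
          "method": "nvmf_create_transport",
          "params": { "trtype": "TCP" }
        },
        {
          "method": "nvmf_create_subsystem",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "serial_number": "SPDK00000000000001",
            "max_namespaces": 10
          }
        },
        {
          "method": "nvmf_subsystem_add_listener",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "secure_channel": true,
            "listen_address": {
              "trtype": "TCP",
              "adrfam": "IPv4",
              "traddr": "10.0.0.2",
              "trsvcid": "4420"
            }
          }
        },
        {
          "method": "nvmf_subsystem_add_host",
          "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "host": "nqn.2016-06.io.spdk:host1",
            "psk": "key0"
          }
        }
      ]
    }
  ]
}
```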
00:20:15.055 5274.00 IOPS, 20.60 MiB/s 00:20:15.055 Latency(us) 00:20:15.055 [2024-11-27T04:42:03.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.055 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.055 Verification LBA range: start 0x0 length 0x2000 00:20:15.055 nvme0n1 : 1.01 5334.69 20.84 0.00 0.00 23837.87 4930.80 34952.53 00:20:15.055 [2024-11-27T04:42:03.059Z] =================================================================================================================== 00:20:15.055 [2024-11-27T04:42:03.059Z] Total : 5334.69 20.84 0.00 0.00 23837.87 4930.80 34952.53 00:20:15.055 { 00:20:15.055 "results": [ 00:20:15.055 { 00:20:15.055 "job": "nvme0n1", 00:20:15.055 "core_mask": "0x2", 00:20:15.055 "workload": "verify", 00:20:15.055 "status": "finished", 00:20:15.055 "verify_range": { 00:20:15.055 "start": 0, 00:20:15.055 "length": 8192 00:20:15.055 }, 00:20:15.055 "queue_depth": 128, 00:20:15.055 "io_size": 4096, 00:20:15.055 "runtime": 1.012805, 00:20:15.055 "iops": 5334.689303469078, 00:20:15.055 "mibps": 20.838630091676087, 00:20:15.055 "io_failed": 0, 00:20:15.055 "io_timeout": 0, 00:20:15.055 "avg_latency_us": 23837.87309695672, 00:20:15.055 "min_latency_us": 4930.80380952381, 00:20:15.055 "max_latency_us": 34952.53333333333 00:20:15.055 } 00:20:15.055 ], 00:20:15.055 "core_count": 1 00:20:15.055 } 00:20:15.055 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:15.055 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.055 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.055 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.055 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:15.055 "subsystems": [ 00:20:15.055 { 00:20:15.055 "subsystem": 
"keyring", 00:20:15.055 "config": [ 00:20:15.055 { 00:20:15.056 "method": "keyring_file_add_key", 00:20:15.056 "params": { 00:20:15.056 "name": "key0", 00:20:15.056 "path": "/tmp/tmp.AfYjBaYj1B" 00:20:15.056 } 00:20:15.056 } 00:20:15.056 ] 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "subsystem": "iobuf", 00:20:15.056 "config": [ 00:20:15.056 { 00:20:15.056 "method": "iobuf_set_options", 00:20:15.056 "params": { 00:20:15.056 "small_pool_count": 8192, 00:20:15.056 "large_pool_count": 1024, 00:20:15.056 "small_bufsize": 8192, 00:20:15.056 "large_bufsize": 135168, 00:20:15.056 "enable_numa": false 00:20:15.056 } 00:20:15.056 } 00:20:15.056 ] 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "subsystem": "sock", 00:20:15.056 "config": [ 00:20:15.056 { 00:20:15.056 "method": "sock_set_default_impl", 00:20:15.056 "params": { 00:20:15.056 "impl_name": "posix" 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "sock_impl_set_options", 00:20:15.056 "params": { 00:20:15.056 "impl_name": "ssl", 00:20:15.056 "recv_buf_size": 4096, 00:20:15.056 "send_buf_size": 4096, 00:20:15.056 "enable_recv_pipe": true, 00:20:15.056 "enable_quickack": false, 00:20:15.056 "enable_placement_id": 0, 00:20:15.056 "enable_zerocopy_send_server": true, 00:20:15.056 "enable_zerocopy_send_client": false, 00:20:15.056 "zerocopy_threshold": 0, 00:20:15.056 "tls_version": 0, 00:20:15.056 "enable_ktls": false 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "sock_impl_set_options", 00:20:15.056 "params": { 00:20:15.056 "impl_name": "posix", 00:20:15.056 "recv_buf_size": 2097152, 00:20:15.056 "send_buf_size": 2097152, 00:20:15.056 "enable_recv_pipe": true, 00:20:15.056 "enable_quickack": false, 00:20:15.056 "enable_placement_id": 0, 00:20:15.056 "enable_zerocopy_send_server": true, 00:20:15.056 "enable_zerocopy_send_client": false, 00:20:15.056 "zerocopy_threshold": 0, 00:20:15.056 "tls_version": 0, 00:20:15.056 "enable_ktls": false 00:20:15.056 } 00:20:15.056 } 00:20:15.056 
] 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "subsystem": "vmd", 00:20:15.056 "config": [] 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "subsystem": "accel", 00:20:15.056 "config": [ 00:20:15.056 { 00:20:15.056 "method": "accel_set_options", 00:20:15.056 "params": { 00:20:15.056 "small_cache_size": 128, 00:20:15.056 "large_cache_size": 16, 00:20:15.056 "task_count": 2048, 00:20:15.056 "sequence_count": 2048, 00:20:15.056 "buf_count": 2048 00:20:15.056 } 00:20:15.056 } 00:20:15.056 ] 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "subsystem": "bdev", 00:20:15.056 "config": [ 00:20:15.056 { 00:20:15.056 "method": "bdev_set_options", 00:20:15.056 "params": { 00:20:15.056 "bdev_io_pool_size": 65535, 00:20:15.056 "bdev_io_cache_size": 256, 00:20:15.056 "bdev_auto_examine": true, 00:20:15.056 "iobuf_small_cache_size": 128, 00:20:15.056 "iobuf_large_cache_size": 16 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "bdev_raid_set_options", 00:20:15.056 "params": { 00:20:15.056 "process_window_size_kb": 1024, 00:20:15.056 "process_max_bandwidth_mb_sec": 0 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "bdev_iscsi_set_options", 00:20:15.056 "params": { 00:20:15.056 "timeout_sec": 30 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "bdev_nvme_set_options", 00:20:15.056 "params": { 00:20:15.056 "action_on_timeout": "none", 00:20:15.056 "timeout_us": 0, 00:20:15.056 "timeout_admin_us": 0, 00:20:15.056 "keep_alive_timeout_ms": 10000, 00:20:15.056 "arbitration_burst": 0, 00:20:15.056 "low_priority_weight": 0, 00:20:15.056 "medium_priority_weight": 0, 00:20:15.056 "high_priority_weight": 0, 00:20:15.056 "nvme_adminq_poll_period_us": 10000, 00:20:15.056 "nvme_ioq_poll_period_us": 0, 00:20:15.056 "io_queue_requests": 0, 00:20:15.056 "delay_cmd_submit": true, 00:20:15.056 "transport_retry_count": 4, 00:20:15.056 "bdev_retry_count": 3, 00:20:15.056 "transport_ack_timeout": 0, 00:20:15.056 "ctrlr_loss_timeout_sec": 0, 
00:20:15.056 "reconnect_delay_sec": 0, 00:20:15.056 "fast_io_fail_timeout_sec": 0, 00:20:15.056 "disable_auto_failback": false, 00:20:15.056 "generate_uuids": false, 00:20:15.056 "transport_tos": 0, 00:20:15.056 "nvme_error_stat": false, 00:20:15.056 "rdma_srq_size": 0, 00:20:15.056 "io_path_stat": false, 00:20:15.056 "allow_accel_sequence": false, 00:20:15.056 "rdma_max_cq_size": 0, 00:20:15.056 "rdma_cm_event_timeout_ms": 0, 00:20:15.056 "dhchap_digests": [ 00:20:15.056 "sha256", 00:20:15.056 "sha384", 00:20:15.056 "sha512" 00:20:15.056 ], 00:20:15.056 "dhchap_dhgroups": [ 00:20:15.056 "null", 00:20:15.056 "ffdhe2048", 00:20:15.056 "ffdhe3072", 00:20:15.056 "ffdhe4096", 00:20:15.056 "ffdhe6144", 00:20:15.056 "ffdhe8192" 00:20:15.056 ] 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "bdev_nvme_set_hotplug", 00:20:15.056 "params": { 00:20:15.056 "period_us": 100000, 00:20:15.056 "enable": false 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "bdev_malloc_create", 00:20:15.056 "params": { 00:20:15.056 "name": "malloc0", 00:20:15.056 "num_blocks": 8192, 00:20:15.056 "block_size": 4096, 00:20:15.056 "physical_block_size": 4096, 00:20:15.056 "uuid": "e9408c11-5373-44ae-8d63-26107693c879", 00:20:15.056 "optimal_io_boundary": 0, 00:20:15.056 "md_size": 0, 00:20:15.056 "dif_type": 0, 00:20:15.056 "dif_is_head_of_md": false, 00:20:15.056 "dif_pi_format": 0 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "bdev_wait_for_examine" 00:20:15.056 } 00:20:15.056 ] 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "subsystem": "nbd", 00:20:15.056 "config": [] 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "subsystem": "scheduler", 00:20:15.056 "config": [ 00:20:15.056 { 00:20:15.056 "method": "framework_set_scheduler", 00:20:15.056 "params": { 00:20:15.056 "name": "static" 00:20:15.056 } 00:20:15.056 } 00:20:15.056 ] 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "subsystem": "nvmf", 00:20:15.056 "config": [ 00:20:15.056 { 
00:20:15.056 "method": "nvmf_set_config", 00:20:15.056 "params": { 00:20:15.056 "discovery_filter": "match_any", 00:20:15.056 "admin_cmd_passthru": { 00:20:15.056 "identify_ctrlr": false 00:20:15.056 }, 00:20:15.056 "dhchap_digests": [ 00:20:15.056 "sha256", 00:20:15.056 "sha384", 00:20:15.056 "sha512" 00:20:15.056 ], 00:20:15.056 "dhchap_dhgroups": [ 00:20:15.056 "null", 00:20:15.056 "ffdhe2048", 00:20:15.056 "ffdhe3072", 00:20:15.056 "ffdhe4096", 00:20:15.056 "ffdhe6144", 00:20:15.056 "ffdhe8192" 00:20:15.056 ] 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "nvmf_set_max_subsystems", 00:20:15.056 "params": { 00:20:15.056 "max_subsystems": 1024 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "nvmf_set_crdt", 00:20:15.056 "params": { 00:20:15.056 "crdt1": 0, 00:20:15.056 "crdt2": 0, 00:20:15.056 "crdt3": 0 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "nvmf_create_transport", 00:20:15.056 "params": { 00:20:15.056 "trtype": "TCP", 00:20:15.056 "max_queue_depth": 128, 00:20:15.056 "max_io_qpairs_per_ctrlr": 127, 00:20:15.056 "in_capsule_data_size": 4096, 00:20:15.056 "max_io_size": 131072, 00:20:15.056 "io_unit_size": 131072, 00:20:15.056 "max_aq_depth": 128, 00:20:15.056 "num_shared_buffers": 511, 00:20:15.056 "buf_cache_size": 4294967295, 00:20:15.056 "dif_insert_or_strip": false, 00:20:15.056 "zcopy": false, 00:20:15.056 "c2h_success": false, 00:20:15.056 "sock_priority": 0, 00:20:15.056 "abort_timeout_sec": 1, 00:20:15.056 "ack_timeout": 0, 00:20:15.056 "data_wr_pool_size": 0 00:20:15.056 } 00:20:15.056 }, 00:20:15.056 { 00:20:15.056 "method": "nvmf_create_subsystem", 00:20:15.056 "params": { 00:20:15.056 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.056 "allow_any_host": false, 00:20:15.056 "serial_number": "00000000000000000000", 00:20:15.056 "model_number": "SPDK bdev Controller", 00:20:15.056 "max_namespaces": 32, 00:20:15.056 "min_cntlid": 1, 00:20:15.056 "max_cntlid": 65519, 00:20:15.056 
"ana_reporting": false 00:20:15.056 } 00:20:15.056 }, 00:20:15.057 { 00:20:15.057 "method": "nvmf_subsystem_add_host", 00:20:15.057 "params": { 00:20:15.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.057 "host": "nqn.2016-06.io.spdk:host1", 00:20:15.057 "psk": "key0" 00:20:15.057 } 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "method": "nvmf_subsystem_add_ns", 00:20:15.057 "params": { 00:20:15.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.057 "namespace": { 00:20:15.057 "nsid": 1, 00:20:15.057 "bdev_name": "malloc0", 00:20:15.057 "nguid": "E9408C11537344AE8D6326107693C879", 00:20:15.057 "uuid": "e9408c11-5373-44ae-8d63-26107693c879", 00:20:15.057 "no_auto_visible": false 00:20:15.057 } 00:20:15.057 } 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "method": "nvmf_subsystem_add_listener", 00:20:15.057 "params": { 00:20:15.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.057 "listen_address": { 00:20:15.057 "trtype": "TCP", 00:20:15.057 "adrfam": "IPv4", 00:20:15.057 "traddr": "10.0.0.2", 00:20:15.057 "trsvcid": "4420" 00:20:15.057 }, 00:20:15.057 "secure_channel": false, 00:20:15.057 "sock_impl": "ssl" 00:20:15.057 } 00:20:15.057 } 00:20:15.057 ] 00:20:15.057 } 00:20:15.057 ] 00:20:15.057 }' 00:20:15.057 05:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:15.057 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:15.057 "subsystems": [ 00:20:15.057 { 00:20:15.057 "subsystem": "keyring", 00:20:15.057 "config": [ 00:20:15.057 { 00:20:15.057 "method": "keyring_file_add_key", 00:20:15.057 "params": { 00:20:15.057 "name": "key0", 00:20:15.057 "path": "/tmp/tmp.AfYjBaYj1B" 00:20:15.057 } 00:20:15.057 } 00:20:15.057 ] 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "subsystem": "iobuf", 00:20:15.057 "config": [ 00:20:15.057 { 00:20:15.057 "method": "iobuf_set_options", 00:20:15.057 "params": { 00:20:15.057 
"small_pool_count": 8192, 00:20:15.057 "large_pool_count": 1024, 00:20:15.057 "small_bufsize": 8192, 00:20:15.057 "large_bufsize": 135168, 00:20:15.057 "enable_numa": false 00:20:15.057 } 00:20:15.057 } 00:20:15.057 ] 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "subsystem": "sock", 00:20:15.057 "config": [ 00:20:15.057 { 00:20:15.057 "method": "sock_set_default_impl", 00:20:15.057 "params": { 00:20:15.057 "impl_name": "posix" 00:20:15.057 } 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "method": "sock_impl_set_options", 00:20:15.057 "params": { 00:20:15.057 "impl_name": "ssl", 00:20:15.057 "recv_buf_size": 4096, 00:20:15.057 "send_buf_size": 4096, 00:20:15.057 "enable_recv_pipe": true, 00:20:15.057 "enable_quickack": false, 00:20:15.057 "enable_placement_id": 0, 00:20:15.057 "enable_zerocopy_send_server": true, 00:20:15.057 "enable_zerocopy_send_client": false, 00:20:15.057 "zerocopy_threshold": 0, 00:20:15.057 "tls_version": 0, 00:20:15.057 "enable_ktls": false 00:20:15.057 } 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "method": "sock_impl_set_options", 00:20:15.057 "params": { 00:20:15.057 "impl_name": "posix", 00:20:15.057 "recv_buf_size": 2097152, 00:20:15.057 "send_buf_size": 2097152, 00:20:15.057 "enable_recv_pipe": true, 00:20:15.057 "enable_quickack": false, 00:20:15.057 "enable_placement_id": 0, 00:20:15.057 "enable_zerocopy_send_server": true, 00:20:15.057 "enable_zerocopy_send_client": false, 00:20:15.057 "zerocopy_threshold": 0, 00:20:15.057 "tls_version": 0, 00:20:15.057 "enable_ktls": false 00:20:15.057 } 00:20:15.057 } 00:20:15.057 ] 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "subsystem": "vmd", 00:20:15.057 "config": [] 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "subsystem": "accel", 00:20:15.057 "config": [ 00:20:15.057 { 00:20:15.057 "method": "accel_set_options", 00:20:15.057 "params": { 00:20:15.057 "small_cache_size": 128, 00:20:15.057 "large_cache_size": 16, 00:20:15.057 "task_count": 2048, 00:20:15.057 "sequence_count": 2048, 00:20:15.057 
"buf_count": 2048 00:20:15.057 } 00:20:15.057 } 00:20:15.057 ] 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "subsystem": "bdev", 00:20:15.057 "config": [ 00:20:15.057 { 00:20:15.057 "method": "bdev_set_options", 00:20:15.057 "params": { 00:20:15.057 "bdev_io_pool_size": 65535, 00:20:15.057 "bdev_io_cache_size": 256, 00:20:15.057 "bdev_auto_examine": true, 00:20:15.057 "iobuf_small_cache_size": 128, 00:20:15.057 "iobuf_large_cache_size": 16 00:20:15.057 } 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "method": "bdev_raid_set_options", 00:20:15.057 "params": { 00:20:15.057 "process_window_size_kb": 1024, 00:20:15.057 "process_max_bandwidth_mb_sec": 0 00:20:15.057 } 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "method": "bdev_iscsi_set_options", 00:20:15.057 "params": { 00:20:15.057 "timeout_sec": 30 00:20:15.057 } 00:20:15.057 }, 00:20:15.057 { 00:20:15.057 "method": "bdev_nvme_set_options", 00:20:15.057 "params": { 00:20:15.057 "action_on_timeout": "none", 00:20:15.057 "timeout_us": 0, 00:20:15.057 "timeout_admin_us": 0, 00:20:15.057 "keep_alive_timeout_ms": 10000, 00:20:15.057 "arbitration_burst": 0, 00:20:15.057 "low_priority_weight": 0, 00:20:15.057 "medium_priority_weight": 0, 00:20:15.057 "high_priority_weight": 0, 00:20:15.057 "nvme_adminq_poll_period_us": 10000, 00:20:15.057 "nvme_ioq_poll_period_us": 0, 00:20:15.057 "io_queue_requests": 512, 00:20:15.057 "delay_cmd_submit": true, 00:20:15.057 "transport_retry_count": 4, 00:20:15.057 "bdev_retry_count": 3, 00:20:15.057 "transport_ack_timeout": 0, 00:20:15.057 "ctrlr_loss_timeout_sec": 0, 00:20:15.057 "reconnect_delay_sec": 0, 00:20:15.057 "fast_io_fail_timeout_sec": 0, 00:20:15.057 "disable_auto_failback": false, 00:20:15.057 "generate_uuids": false, 00:20:15.058 "transport_tos": 0, 00:20:15.058 "nvme_error_stat": false, 00:20:15.058 "rdma_srq_size": 0, 00:20:15.058 "io_path_stat": false, 00:20:15.058 "allow_accel_sequence": false, 00:20:15.058 "rdma_max_cq_size": 0, 00:20:15.058 "rdma_cm_event_timeout_ms": 0, 
00:20:15.058 "dhchap_digests": [ 00:20:15.058 "sha256", 00:20:15.058 "sha384", 00:20:15.058 "sha512" 00:20:15.058 ], 00:20:15.058 "dhchap_dhgroups": [ 00:20:15.058 "null", 00:20:15.058 "ffdhe2048", 00:20:15.058 "ffdhe3072", 00:20:15.058 "ffdhe4096", 00:20:15.058 "ffdhe6144", 00:20:15.058 "ffdhe8192" 00:20:15.058 ] 00:20:15.058 } 00:20:15.058 }, 00:20:15.058 { 00:20:15.058 "method": "bdev_nvme_attach_controller", 00:20:15.058 "params": { 00:20:15.058 "name": "nvme0", 00:20:15.058 "trtype": "TCP", 00:20:15.058 "adrfam": "IPv4", 00:20:15.058 "traddr": "10.0.0.2", 00:20:15.058 "trsvcid": "4420", 00:20:15.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.058 "prchk_reftag": false, 00:20:15.058 "prchk_guard": false, 00:20:15.058 "ctrlr_loss_timeout_sec": 0, 00:20:15.058 "reconnect_delay_sec": 0, 00:20:15.058 "fast_io_fail_timeout_sec": 0, 00:20:15.058 "psk": "key0", 00:20:15.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:15.058 "hdgst": false, 00:20:15.058 "ddgst": false, 00:20:15.058 "multipath": "multipath" 00:20:15.058 } 00:20:15.058 }, 00:20:15.058 { 00:20:15.058 "method": "bdev_nvme_set_hotplug", 00:20:15.058 "params": { 00:20:15.058 "period_us": 100000, 00:20:15.058 "enable": false 00:20:15.058 } 00:20:15.058 }, 00:20:15.058 { 00:20:15.058 "method": "bdev_enable_histogram", 00:20:15.058 "params": { 00:20:15.058 "name": "nvme0n1", 00:20:15.058 "enable": true 00:20:15.058 } 00:20:15.058 }, 00:20:15.058 { 00:20:15.058 "method": "bdev_wait_for_examine" 00:20:15.058 } 00:20:15.058 ] 00:20:15.058 }, 00:20:15.058 { 00:20:15.058 "subsystem": "nbd", 00:20:15.058 "config": [] 00:20:15.058 } 00:20:15.058 ] 00:20:15.058 }' 00:20:15.058 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1786909 00:20:15.058 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1786909 ']' 00:20:15.058 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1786909 00:20:15.058 05:42:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:15.058 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.058 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1786909 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1786909' 00:20:15.317 killing process with pid 1786909 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1786909 00:20:15.317 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.317 00:20:15.317 Latency(us) 00:20:15.317 [2024-11-27T04:42:03.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.317 [2024-11-27T04:42:03.321Z] =================================================================================================================== 00:20:15.317 [2024-11-27T04:42:03.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1786909 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1786769 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1786769 ']' 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1786769 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.317 
05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1786769 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1786769' 00:20:15.317 killing process with pid 1786769 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1786769 00:20:15.317 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1786769 00:20:15.577 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:15.577 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.577 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.577 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:15.577 "subsystems": [ 00:20:15.577 { 00:20:15.577 "subsystem": "keyring", 00:20:15.577 "config": [ 00:20:15.577 { 00:20:15.577 "method": "keyring_file_add_key", 00:20:15.577 "params": { 00:20:15.577 "name": "key0", 00:20:15.577 "path": "/tmp/tmp.AfYjBaYj1B" 00:20:15.577 } 00:20:15.577 } 00:20:15.577 ] 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "subsystem": "iobuf", 00:20:15.577 "config": [ 00:20:15.577 { 00:20:15.577 "method": "iobuf_set_options", 00:20:15.577 "params": { 00:20:15.577 "small_pool_count": 8192, 00:20:15.577 "large_pool_count": 1024, 00:20:15.577 "small_bufsize": 8192, 00:20:15.577 "large_bufsize": 135168, 00:20:15.577 "enable_numa": false 00:20:15.577 } 00:20:15.577 } 00:20:15.577 ] 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "subsystem": "sock", 00:20:15.577 "config": [ 
00:20:15.577 { 00:20:15.577 "method": "sock_set_default_impl", 00:20:15.577 "params": { 00:20:15.577 "impl_name": "posix" 00:20:15.577 } 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "method": "sock_impl_set_options", 00:20:15.577 "params": { 00:20:15.577 "impl_name": "ssl", 00:20:15.577 "recv_buf_size": 4096, 00:20:15.577 "send_buf_size": 4096, 00:20:15.577 "enable_recv_pipe": true, 00:20:15.577 "enable_quickack": false, 00:20:15.577 "enable_placement_id": 0, 00:20:15.577 "enable_zerocopy_send_server": true, 00:20:15.577 "enable_zerocopy_send_client": false, 00:20:15.577 "zerocopy_threshold": 0, 00:20:15.577 "tls_version": 0, 00:20:15.577 "enable_ktls": false 00:20:15.577 } 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "method": "sock_impl_set_options", 00:20:15.577 "params": { 00:20:15.577 "impl_name": "posix", 00:20:15.577 "recv_buf_size": 2097152, 00:20:15.577 "send_buf_size": 2097152, 00:20:15.577 "enable_recv_pipe": true, 00:20:15.577 "enable_quickack": false, 00:20:15.577 "enable_placement_id": 0, 00:20:15.577 "enable_zerocopy_send_server": true, 00:20:15.577 "enable_zerocopy_send_client": false, 00:20:15.577 "zerocopy_threshold": 0, 00:20:15.577 "tls_version": 0, 00:20:15.577 "enable_ktls": false 00:20:15.577 } 00:20:15.577 } 00:20:15.577 ] 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "subsystem": "vmd", 00:20:15.577 "config": [] 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "subsystem": "accel", 00:20:15.577 "config": [ 00:20:15.577 { 00:20:15.577 "method": "accel_set_options", 00:20:15.577 "params": { 00:20:15.577 "small_cache_size": 128, 00:20:15.577 "large_cache_size": 16, 00:20:15.577 "task_count": 2048, 00:20:15.577 "sequence_count": 2048, 00:20:15.577 "buf_count": 2048 00:20:15.577 } 00:20:15.577 } 00:20:15.577 ] 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "subsystem": "bdev", 00:20:15.577 "config": [ 00:20:15.577 { 00:20:15.577 "method": "bdev_set_options", 00:20:15.577 "params": { 00:20:15.577 "bdev_io_pool_size": 65535, 00:20:15.577 "bdev_io_cache_size": 
256, 00:20:15.577 "bdev_auto_examine": true, 00:20:15.577 "iobuf_small_cache_size": 128, 00:20:15.577 "iobuf_large_cache_size": 16 00:20:15.577 } 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "method": "bdev_raid_set_options", 00:20:15.577 "params": { 00:20:15.577 "process_window_size_kb": 1024, 00:20:15.577 "process_max_bandwidth_mb_sec": 0 00:20:15.577 } 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "method": "bdev_iscsi_set_options", 00:20:15.577 "params": { 00:20:15.577 "timeout_sec": 30 00:20:15.577 } 00:20:15.577 }, 00:20:15.577 { 00:20:15.577 "method": "bdev_nvme_set_options", 00:20:15.577 "params": { 00:20:15.577 "action_on_timeout": "none", 00:20:15.577 "timeout_us": 0, 00:20:15.577 "timeout_admin_us": 0, 00:20:15.577 "keep_alive_timeout_ms": 10000, 00:20:15.577 "arbitration_burst": 0, 00:20:15.577 "low_priority_weight": 0, 00:20:15.577 "medium_priority_weight": 0, 00:20:15.578 "high_priority_weight": 0, 00:20:15.578 "nvme_adminq_poll_period_us": 10000, 00:20:15.578 "nvme_ioq_poll_period_us": 0, 00:20:15.578 "io_queue_requests": 0, 00:20:15.578 "delay_cmd_submit": true, 00:20:15.578 "transport_retry_count": 4, 00:20:15.578 "bdev_retry_count": 3, 00:20:15.578 "transport_ack_timeout": 0, 00:20:15.578 "ctrlr_loss_timeout_sec": 0, 00:20:15.578 "reconnect_delay_sec": 0, 00:20:15.578 "fast_io_fail_timeout_sec": 0, 00:20:15.578 "disable_auto_failback": false, 00:20:15.578 "generate_uuids": false, 00:20:15.578 "transport_tos": 0, 00:20:15.578 "nvme_error_stat": false, 00:20:15.578 "rdma_srq_size": 0, 00:20:15.578 "io_path_stat": false, 00:20:15.578 "allow_accel_sequence": false, 00:20:15.578 "rdma_max_cq_size": 0, 00:20:15.578 "rdma_cm_event_timeout_ms": 0, 00:20:15.578 "dhchap_digests": [ 00:20:15.578 "sha256", 00:20:15.578 "sha384", 00:20:15.578 "sha512" 00:20:15.578 ], 00:20:15.578 "dhchap_dhgroups": [ 00:20:15.578 "null", 00:20:15.578 "ffdhe2048", 00:20:15.578 "ffdhe3072", 00:20:15.578 "ffdhe4096", 00:20:15.578 "ffdhe6144", 00:20:15.578 "ffdhe8192" 00:20:15.578 ] 
00:20:15.578 } 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "method": "bdev_nvme_set_hotplug", 00:20:15.578 "params": { 00:20:15.578 "period_us": 100000, 00:20:15.578 "enable": false 00:20:15.578 } 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "method": "bdev_malloc_create", 00:20:15.578 "params": { 00:20:15.578 "name": "malloc0", 00:20:15.578 "num_blocks": 8192, 00:20:15.578 "block_size": 4096, 00:20:15.578 "physical_block_size": 4096, 00:20:15.578 "uuid": "e9408c11-5373-44ae-8d63-26107693c879", 00:20:15.578 "optimal_io_boundary": 0, 00:20:15.578 "md_size": 0, 00:20:15.578 "dif_type": 0, 00:20:15.578 "dif_is_head_of_md": false, 00:20:15.578 "dif_pi_format": 0 00:20:15.578 } 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "method": "bdev_wait_for_examine" 00:20:15.578 } 00:20:15.578 ] 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "subsystem": "nbd", 00:20:15.578 "config": [] 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "subsystem": "scheduler", 00:20:15.578 "config": [ 00:20:15.578 { 00:20:15.578 "method": "framework_set_scheduler", 00:20:15.578 "params": { 00:20:15.578 "name": "static" 00:20:15.578 } 00:20:15.578 } 00:20:15.578 ] 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "subsystem": "nvmf", 00:20:15.578 "config": [ 00:20:15.578 { 00:20:15.578 "method": "nvmf_set_config", 00:20:15.578 "params": { 00:20:15.578 "discovery_filter": "match_any", 00:20:15.578 "admin_cmd_passthru": { 00:20:15.578 "identify_ctrlr": false 00:20:15.578 }, 00:20:15.578 "dhchap_digests": [ 00:20:15.578 "sha256", 00:20:15.578 "sha384", 00:20:15.578 "sha512" 00:20:15.578 ], 00:20:15.578 "dhchap_dhgroups": [ 00:20:15.578 "null", 00:20:15.578 "ffdhe2048", 00:20:15.578 "ffdhe3072", 00:20:15.578 "ffdhe4096", 00:20:15.578 "ffdhe6144", 00:20:15.578 "ffdhe8192" 00:20:15.578 ] 00:20:15.578 } 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "method": "nvmf_set_max_subsystems", 00:20:15.578 "params": { 00:20:15.578 "max_subsystems": 1024 00:20:15.578 } 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "method": 
"nvmf_set_crdt", 00:20:15.578 "params": { 00:20:15.578 "crdt1": 0, 00:20:15.578 "crdt2": 0, 00:20:15.578 "crdt3": 0 00:20:15.578 } 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "method": "nvmf_create_transport", 00:20:15.578 "params": { 00:20:15.578 "trtype": "TCP", 00:20:15.578 "max_queue_depth": 128, 00:20:15.578 "max_io_qpairs_per_ctrlr": 127, 00:20:15.578 "in_capsule_data_size": 4096, 00:20:15.578 "max_io_size": 131072, 00:20:15.578 "io_unit_size": 131072, 00:20:15.578 "max_aq_depth": 128, 00:20:15.578 "num_shared_buffers": 511, 00:20:15.578 "buf_cache_size": 4294967295, 00:20:15.578 "dif_insert_or_strip": false, 00:20:15.578 "zcopy": false, 00:20:15.578 "c2h_success": false, 00:20:15.578 "sock_priority": 0, 00:20:15.578 "abort_timeout_sec": 1, 00:20:15.578 "ack_timeout": 0, 00:20:15.578 "data_wr_pool_size": 0 00:20:15.578 } 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "method": "nvmf_create_subsystem", 00:20:15.578 "params": { 00:20:15.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.578 "allow_any_host": false, 00:20:15.578 "serial_number": "00000000000000000000", 00:20:15.578 "model_number": "SPDK bdev Controller", 00:20:15.578 "max_namespaces": 32, 00:20:15.578 "min_cntlid": 1, 00:20:15.578 "max_cntlid": 65519, 00:20:15.578 "ana_reporting": false 00:20:15.578 } 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "method": "nvmf_subsystem_add_host", 00:20:15.578 "params": { 00:20:15.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.578 "host": "nqn.2016-06.io.spdk:host1", 00:20:15.578 "psk": "key0" 00:20:15.578 } 00:20:15.578 }, 00:20:15.578 { 00:20:15.578 "method": "nvmf_subsystem_add_ns", 00:20:15.578 "params": { 00:20:15.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.578 "namespace": { 00:20:15.578 "nsid": 1, 00:20:15.578 "bdev_name": "malloc0", 00:20:15.578 "nguid": "E9408C11537344AE8D6326107693C879", 00:20:15.578 "uuid": "e9408c11-5373-44ae-8d63-26107693c879", 00:20:15.578 "no_auto_visible": false 00:20:15.578 } 00:20:15.578 } 00:20:15.578 }, 00:20:15.578 { 
00:20:15.578 "method": "nvmf_subsystem_add_listener", 00:20:15.578 "params": { 00:20:15.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.578 "listen_address": { 00:20:15.578 "trtype": "TCP", 00:20:15.578 "adrfam": "IPv4", 00:20:15.578 "traddr": "10.0.0.2", 00:20:15.578 "trsvcid": "4420" 00:20:15.578 }, 00:20:15.578 "secure_channel": false, 00:20:15.578 "sock_impl": "ssl" 00:20:15.578 } 00:20:15.579 } 00:20:15.579 ] 00:20:15.579 } 00:20:15.579 ] 00:20:15.579 }' 00:20:15.579 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.579 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1787368 00:20:15.579 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:15.579 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1787368 00:20:15.579 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1787368 ']' 00:20:15.579 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.579 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.579 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.579 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.579 05:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.579 [2024-11-27 05:42:03.544607] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:20:15.579 [2024-11-27 05:42:03.544652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.838 [2024-11-27 05:42:03.622567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.838 [2024-11-27 05:42:03.663355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.838 [2024-11-27 05:42:03.663392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.838 [2024-11-27 05:42:03.663400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.838 [2024-11-27 05:42:03.663406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.838 [2024-11-27 05:42:03.663411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:15.838 [2024-11-27 05:42:03.664027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.097 [2024-11-27 05:42:03.878956] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.097 [2024-11-27 05:42:03.910978] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:16.097 [2024-11-27 05:42:03.911204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1787610 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1787610 /var/tmp/bdevperf.sock 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1787610 ']' 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.668 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:16.668 "subsystems": [ 00:20:16.668 { 00:20:16.668 "subsystem": "keyring", 00:20:16.668 "config": [ 00:20:16.668 { 00:20:16.668 "method": "keyring_file_add_key", 00:20:16.668 "params": { 00:20:16.668 "name": "key0", 00:20:16.668 "path": "/tmp/tmp.AfYjBaYj1B" 00:20:16.668 } 00:20:16.668 } 00:20:16.668 ] 00:20:16.668 }, 00:20:16.668 { 00:20:16.668 "subsystem": "iobuf", 00:20:16.668 "config": [ 00:20:16.668 { 00:20:16.668 "method": "iobuf_set_options", 00:20:16.669 "params": { 00:20:16.669 "small_pool_count": 8192, 00:20:16.669 "large_pool_count": 1024, 00:20:16.669 "small_bufsize": 8192, 00:20:16.669 "large_bufsize": 135168, 00:20:16.669 "enable_numa": false 00:20:16.669 } 00:20:16.669 } 00:20:16.669 ] 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "subsystem": "sock", 00:20:16.669 "config": [ 00:20:16.669 { 00:20:16.669 "method": "sock_set_default_impl", 00:20:16.669 "params": { 00:20:16.669 "impl_name": "posix" 00:20:16.669 } 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "method": "sock_impl_set_options", 00:20:16.669 "params": { 00:20:16.669 "impl_name": "ssl", 00:20:16.669 "recv_buf_size": 4096, 00:20:16.669 "send_buf_size": 4096, 00:20:16.669 "enable_recv_pipe": true, 00:20:16.669 "enable_quickack": false, 00:20:16.669 "enable_placement_id": 0, 00:20:16.669 "enable_zerocopy_send_server": true, 00:20:16.669 "enable_zerocopy_send_client": false, 00:20:16.669 "zerocopy_threshold": 0, 00:20:16.669 "tls_version": 0, 00:20:16.669 "enable_ktls": false 00:20:16.669 } 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "method": "sock_impl_set_options", 00:20:16.669 "params": { 
00:20:16.669 "impl_name": "posix", 00:20:16.669 "recv_buf_size": 2097152, 00:20:16.669 "send_buf_size": 2097152, 00:20:16.669 "enable_recv_pipe": true, 00:20:16.669 "enable_quickack": false, 00:20:16.669 "enable_placement_id": 0, 00:20:16.669 "enable_zerocopy_send_server": true, 00:20:16.669 "enable_zerocopy_send_client": false, 00:20:16.669 "zerocopy_threshold": 0, 00:20:16.669 "tls_version": 0, 00:20:16.669 "enable_ktls": false 00:20:16.669 } 00:20:16.669 } 00:20:16.669 ] 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "subsystem": "vmd", 00:20:16.669 "config": [] 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "subsystem": "accel", 00:20:16.669 "config": [ 00:20:16.669 { 00:20:16.669 "method": "accel_set_options", 00:20:16.669 "params": { 00:20:16.669 "small_cache_size": 128, 00:20:16.669 "large_cache_size": 16, 00:20:16.669 "task_count": 2048, 00:20:16.669 "sequence_count": 2048, 00:20:16.669 "buf_count": 2048 00:20:16.669 } 00:20:16.669 } 00:20:16.669 ] 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "subsystem": "bdev", 00:20:16.669 "config": [ 00:20:16.669 { 00:20:16.669 "method": "bdev_set_options", 00:20:16.669 "params": { 00:20:16.669 "bdev_io_pool_size": 65535, 00:20:16.669 "bdev_io_cache_size": 256, 00:20:16.669 "bdev_auto_examine": true, 00:20:16.669 "iobuf_small_cache_size": 128, 00:20:16.669 "iobuf_large_cache_size": 16 00:20:16.669 } 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "method": "bdev_raid_set_options", 00:20:16.669 "params": { 00:20:16.669 "process_window_size_kb": 1024, 00:20:16.669 "process_max_bandwidth_mb_sec": 0 00:20:16.669 } 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "method": "bdev_iscsi_set_options", 00:20:16.669 "params": { 00:20:16.669 "timeout_sec": 30 00:20:16.669 } 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "method": "bdev_nvme_set_options", 00:20:16.669 "params": { 00:20:16.669 "action_on_timeout": "none", 00:20:16.669 "timeout_us": 0, 00:20:16.669 "timeout_admin_us": 0, 00:20:16.669 "keep_alive_timeout_ms": 10000, 00:20:16.669 
"arbitration_burst": 0, 00:20:16.669 "low_priority_weight": 0, 00:20:16.669 "medium_priority_weight": 0, 00:20:16.669 "high_priority_weight": 0, 00:20:16.669 "nvme_adminq_poll_period_us": 10000, 00:20:16.669 "nvme_ioq_poll_period_us": 0, 00:20:16.669 "io_queue_requests": 512, 00:20:16.669 "delay_cmd_submit": true, 00:20:16.669 "transport_retry_count": 4, 00:20:16.669 "bdev_retry_count": 3, 00:20:16.669 "transport_ack_timeout": 0, 00:20:16.669 "ctrlr_loss_timeout_sec": 0, 00:20:16.669 "reconnect_delay_sec": 0, 00:20:16.669 "fast_io_fail_timeout_sec": 0, 00:20:16.669 "disable_auto_failback": false, 00:20:16.669 "generate_uuids": false, 00:20:16.669 "transport_tos": 0, 00:20:16.669 "nvme_error_stat": false, 00:20:16.669 "rdma_srq_size": 0, 00:20:16.669 "io_path_stat": false, 00:20:16.669 "allow_accel_sequence": false, 00:20:16.669 "rdma_max_cq_size": 0, 00:20:16.669 "rdma_cm_event_timeout_ms": 0, 00:20:16.669 "dhchap_digests": [ 00:20:16.669 "sha256", 00:20:16.669 "sha384", 00:20:16.669 "sha512" 00:20:16.669 ], 00:20:16.669 "dhchap_dhgroups": [ 00:20:16.669 "null", 00:20:16.669 "ffdhe2048", 00:20:16.669 "ffdhe3072", 00:20:16.669 "ffdhe4096", 00:20:16.669 "ffdhe6144", 00:20:16.669 "ffdhe8192" 00:20:16.669 ] 00:20:16.669 } 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "method": "bdev_nvme_attach_controller", 00:20:16.669 "params": { 00:20:16.669 "name": "nvme0", 00:20:16.669 "trtype": "TCP", 00:20:16.669 "adrfam": "IPv4", 00:20:16.669 "traddr": "10.0.0.2", 00:20:16.669 "trsvcid": "4420", 00:20:16.669 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.669 "prchk_reftag": false, 00:20:16.669 "prchk_guard": false, 00:20:16.669 "ctrlr_loss_timeout_sec": 0, 00:20:16.669 "reconnect_delay_sec": 0, 00:20:16.669 "fast_io_fail_timeout_sec": 0, 00:20:16.669 "psk": "key0", 00:20:16.669 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.669 "hdgst": false, 00:20:16.669 "ddgst": false, 00:20:16.669 "multipath": "multipath" 00:20:16.669 } 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 
"method": "bdev_nvme_set_hotplug", 00:20:16.669 "params": { 00:20:16.669 "period_us": 100000, 00:20:16.669 "enable": false 00:20:16.669 } 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "method": "bdev_enable_histogram", 00:20:16.669 "params": { 00:20:16.669 "name": "nvme0n1", 00:20:16.669 "enable": true 00:20:16.669 } 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "method": "bdev_wait_for_examine" 00:20:16.669 } 00:20:16.669 ] 00:20:16.669 }, 00:20:16.669 { 00:20:16.669 "subsystem": "nbd", 00:20:16.669 "config": [] 00:20:16.669 } 00:20:16.669 ] 00:20:16.669 }' 00:20:16.669 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.669 05:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.669 [2024-11-27 05:42:04.466720] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:20:16.669 [2024-11-27 05:42:04.466767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1787610 ] 00:20:16.669 [2024-11-27 05:42:04.540390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.670 [2024-11-27 05:42:04.581448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.928 [2024-11-27 05:42:04.734945] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.497 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.497 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:17.497 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:17.497 05:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:17.755 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.755 05:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:17.755 Running I/O for 1 seconds... 00:20:18.692 5270.00 IOPS, 20.59 MiB/s 00:20:18.692 Latency(us) 00:20:18.692 [2024-11-27T04:42:06.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.692 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:18.692 Verification LBA range: start 0x0 length 0x2000 00:20:18.692 nvme0n1 : 1.02 5298.41 20.70 0.00 0.00 23951.70 6210.32 24716.43 00:20:18.692 [2024-11-27T04:42:06.696Z] =================================================================================================================== 00:20:18.692 [2024-11-27T04:42:06.696Z] Total : 5298.41 20.70 0.00 0.00 23951.70 6210.32 24716.43 00:20:18.692 { 00:20:18.692 "results": [ 00:20:18.692 { 00:20:18.692 "job": "nvme0n1", 00:20:18.692 "core_mask": "0x2", 00:20:18.692 "workload": "verify", 00:20:18.692 "status": "finished", 00:20:18.692 "verify_range": { 00:20:18.692 "start": 0, 00:20:18.692 "length": 8192 00:20:18.692 }, 00:20:18.692 "queue_depth": 128, 00:20:18.692 "io_size": 4096, 00:20:18.692 "runtime": 1.018797, 00:20:18.692 "iops": 5298.405864956414, 00:20:18.692 "mibps": 20.696897909985992, 00:20:18.692 "io_failed": 0, 00:20:18.692 "io_timeout": 0, 00:20:18.692 "avg_latency_us": 23951.703085798974, 00:20:18.692 "min_latency_us": 6210.31619047619, 00:20:18.692 "max_latency_us": 24716.434285714287 00:20:18.692 } 00:20:18.692 ], 00:20:18.692 "core_count": 1 00:20:18.692 } 00:20:18.692 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:18.692 05:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:18.692 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:18.692 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:18.692 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:18.692 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:18.692 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:18.692 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:18.692 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:18.692 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:18.692 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:18.692 nvmf_trace.0 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1787610 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1787610 ']' 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1787610 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1787610 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1787610' 00:20:18.951 killing process with pid 1787610 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1787610 00:20:18.951 Received shutdown signal, test time was about 1.000000 seconds 00:20:18.951 00:20:18.951 Latency(us) 00:20:18.951 [2024-11-27T04:42:06.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.951 [2024-11-27T04:42:06.955Z] =================================================================================================================== 00:20:18.951 [2024-11-27T04:42:06.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1787610 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:18.951 05:42:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:19.210 rmmod nvme_tcp 00:20:19.210 rmmod nvme_fabrics 00:20:19.210 rmmod nvme_keyring 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1787368 ']' 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1787368 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1787368 ']' 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1787368 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1787368 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1787368' 00:20:19.210 killing process with pid 1787368 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1787368 00:20:19.210 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1787368 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.470 05:42:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.386 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:21.386 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.STg93ETkDX /tmp/tmp.dBf86zKJWX /tmp/tmp.AfYjBaYj1B 00:20:21.386 00:20:21.386 real 1m19.234s 00:20:21.386 user 2m0.817s 00:20:21.386 sys 0m30.799s 00:20:21.386 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.386 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.386 ************************************ 00:20:21.386 END TEST nvmf_tls 00:20:21.386 ************************************ 00:20:21.386 05:42:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:21.386 05:42:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.386 05:42:09 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.386 05:42:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:21.648 ************************************ 00:20:21.648 START TEST nvmf_fips 00:20:21.648 ************************************ 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:21.648 * Looking for test storage... 00:20:21.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.648 
05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:21.648 05:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:21.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.648 --rc genhtml_branch_coverage=1 00:20:21.648 --rc genhtml_function_coverage=1 00:20:21.648 --rc genhtml_legend=1 00:20:21.648 --rc geninfo_all_blocks=1 00:20:21.648 --rc geninfo_unexecuted_blocks=1 00:20:21.648 00:20:21.648 ' 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:21.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.648 --rc genhtml_branch_coverage=1 00:20:21.648 --rc genhtml_function_coverage=1 00:20:21.648 --rc genhtml_legend=1 00:20:21.648 --rc geninfo_all_blocks=1 00:20:21.648 --rc geninfo_unexecuted_blocks=1 00:20:21.648 00:20:21.648 ' 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:21.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.648 --rc genhtml_branch_coverage=1 00:20:21.648 --rc genhtml_function_coverage=1 00:20:21.648 --rc genhtml_legend=1 00:20:21.648 --rc geninfo_all_blocks=1 00:20:21.648 --rc geninfo_unexecuted_blocks=1 00:20:21.648 00:20:21.648 ' 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:21.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.648 --rc genhtml_branch_coverage=1 00:20:21.648 --rc genhtml_function_coverage=1 00:20:21.648 --rc genhtml_legend=1 00:20:21.648 --rc geninfo_all_blocks=1 00:20:21.648 --rc geninfo_unexecuted_blocks=1 00:20:21.648 00:20:21.648 ' 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.648 05:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.648 05:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.648 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:21.649 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:21.909 Error setting digest 00:20:21.909 40924707DA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:21.909 40924707DA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.909 05:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:21.909 05:42:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:28.478 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:28.478 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:28.478 Found net devices under 0000:86:00.0: cvl_0_0 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:28.478 Found net devices under 0000:86:00.1: cvl_0_1 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.478 05:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.478 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:28.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:20:28.479 00:20:28.479 --- 10.0.0.2 ping statistics --- 00:20:28.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.479 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:20:28.479 00:20:28.479 --- 10.0.0.1 ping statistics --- 00:20:28.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.479 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:28.479 05:42:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1792016 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1792016 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1792016 ']' 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.479 05:42:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.479 [2024-11-27 05:42:15.801252] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:20:28.479 [2024-11-27 05:42:15.801299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.479 [2024-11-27 05:42:15.876858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.479 [2024-11-27 05:42:15.921532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.479 [2024-11-27 05:42:15.921567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.479 [2024-11-27 05:42:15.921575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.479 [2024-11-27 05:42:15.921581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.479 [2024-11-27 05:42:15.921587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:28.479 [2024-11-27 05:42:15.922124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.DZA 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.DZA 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.DZA 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.DZA 00:20:28.738 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:28.998 [2024-11-27 05:42:16.843593] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.998 [2024-11-27 05:42:16.859591] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.998 [2024-11-27 05:42:16.859809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.998 malloc0 00:20:28.998 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.998 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1792265 00:20:28.998 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.998 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1792265 /var/tmp/bdevperf.sock 00:20:28.998 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1792265 ']' 00:20:28.998 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.998 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.998 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.998 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.998 05:42:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:28.998 [2024-11-27 05:42:16.989529] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:20:28.998 [2024-11-27 05:42:16.989578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792265 ] 00:20:29.257 [2024-11-27 05:42:17.063168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.257 [2024-11-27 05:42:17.103047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.826 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.826 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:29.826 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.DZA 00:20:30.085 05:42:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:30.343 [2024-11-27 05:42:18.135724] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.343 TLSTESTn1 00:20:30.343 05:42:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:30.343 Running I/O for 10 seconds... 
00:20:32.657 5460.00 IOPS, 21.33 MiB/s [2024-11-27T04:42:21.598Z] 5484.00 IOPS, 21.42 MiB/s [2024-11-27T04:42:22.536Z] 5499.67 IOPS, 21.48 MiB/s [2024-11-27T04:42:23.474Z] 5539.25 IOPS, 21.64 MiB/s [2024-11-27T04:42:24.412Z] 5548.00 IOPS, 21.67 MiB/s [2024-11-27T04:42:25.349Z] 5512.33 IOPS, 21.53 MiB/s [2024-11-27T04:42:26.729Z] 5527.00 IOPS, 21.59 MiB/s [2024-11-27T04:42:27.667Z] 5525.75 IOPS, 21.58 MiB/s [2024-11-27T04:42:28.604Z] 5535.78 IOPS, 21.62 MiB/s [2024-11-27T04:42:28.604Z] 5525.80 IOPS, 21.59 MiB/s 00:20:40.600 Latency(us) 00:20:40.600 [2024-11-27T04:42:28.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.600 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:40.600 Verification LBA range: start 0x0 length 0x2000 00:20:40.601 TLSTESTn1 : 10.02 5527.38 21.59 0.00 0.00 23117.27 6272.73 23218.47 00:20:40.601 [2024-11-27T04:42:28.605Z] =================================================================================================================== 00:20:40.601 [2024-11-27T04:42:28.605Z] Total : 5527.38 21.59 0.00 0.00 23117.27 6272.73 23218.47 00:20:40.601 { 00:20:40.601 "results": [ 00:20:40.601 { 00:20:40.601 "job": "TLSTESTn1", 00:20:40.601 "core_mask": "0x4", 00:20:40.601 "workload": "verify", 00:20:40.601 "status": "finished", 00:20:40.601 "verify_range": { 00:20:40.601 "start": 0, 00:20:40.601 "length": 8192 00:20:40.601 }, 00:20:40.601 "queue_depth": 128, 00:20:40.601 "io_size": 4096, 00:20:40.601 "runtime": 10.019943, 00:20:40.601 "iops": 5527.376752542405, 00:20:40.601 "mibps": 21.591315439618768, 00:20:40.601 "io_failed": 0, 00:20:40.601 "io_timeout": 0, 00:20:40.601 "avg_latency_us": 23117.27145429658, 00:20:40.601 "min_latency_us": 6272.731428571428, 00:20:40.601 "max_latency_us": 23218.46857142857 00:20:40.601 } 00:20:40.601 ], 00:20:40.601 "core_count": 1 00:20:40.601 } 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:40.601 
05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:40.601 nvmf_trace.0 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1792265 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1792265 ']' 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1792265 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1792265 00:20:40.601 05:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1792265' 00:20:40.601 killing process with pid 1792265 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1792265 00:20:40.601 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.601 00:20:40.601 Latency(us) 00:20:40.601 [2024-11-27T04:42:28.605Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.601 [2024-11-27T04:42:28.605Z] =================================================================================================================== 00:20:40.601 [2024-11-27T04:42:28.605Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.601 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1792265 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:40.861 rmmod nvme_tcp 00:20:40.861 rmmod nvme_fabrics 00:20:40.861 rmmod nvme_keyring 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1792016 ']' 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1792016 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1792016 ']' 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1792016 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1792016 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1792016' 00:20:40.861 killing process with pid 1792016 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1792016 00:20:40.861 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1792016 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.121 05:42:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.DZA 00:20:43.207 00:20:43.207 real 0m21.635s 00:20:43.207 user 0m23.239s 00:20:43.207 sys 0m9.706s 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:43.207 ************************************ 00:20:43.207 END TEST nvmf_fips 00:20:43.207 ************************************ 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:43.207 ************************************ 00:20:43.207 START TEST nvmf_control_msg_list 00:20:43.207 ************************************ 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:43.207 * Looking for test storage... 00:20:43.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:20:43.207 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:43.467 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:43.467 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.468 05:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:43.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.468 --rc genhtml_branch_coverage=1 00:20:43.468 --rc genhtml_function_coverage=1 00:20:43.468 --rc genhtml_legend=1 00:20:43.468 --rc geninfo_all_blocks=1 00:20:43.468 --rc geninfo_unexecuted_blocks=1 00:20:43.468 00:20:43.468 ' 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:43.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.468 --rc genhtml_branch_coverage=1 00:20:43.468 --rc genhtml_function_coverage=1 00:20:43.468 --rc genhtml_legend=1 00:20:43.468 --rc geninfo_all_blocks=1 00:20:43.468 --rc geninfo_unexecuted_blocks=1 00:20:43.468 00:20:43.468 ' 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:43.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.468 --rc genhtml_branch_coverage=1 00:20:43.468 --rc genhtml_function_coverage=1 00:20:43.468 --rc genhtml_legend=1 00:20:43.468 --rc geninfo_all_blocks=1 00:20:43.468 --rc geninfo_unexecuted_blocks=1 00:20:43.468 00:20:43.468 ' 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:20:43.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.468 --rc genhtml_branch_coverage=1 00:20:43.468 --rc genhtml_function_coverage=1 00:20:43.468 --rc genhtml_legend=1 00:20:43.468 --rc geninfo_all_blocks=1 00:20:43.468 --rc geninfo_unexecuted_blocks=1 00:20:43.468 00:20:43.468 ' 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.468 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.469 05:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:43.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:43.469 05:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:43.469 05:42:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.033 05:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:50.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:50.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:50.033 05:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:50.033 Found net devices under 0000:86:00.0: cvl_0_0 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.033 05:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:50.033 Found net devices under 0000:86:00.1: cvl_0_1 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.033 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.034 05:42:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.034 05:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:50.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:20:50.034 00:20:50.034 --- 10.0.0.2 ping statistics --- 00:20:50.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.034 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:20:50.034 00:20:50.034 --- 10.0.0.1 ping statistics --- 00:20:50.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.034 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1797635 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1797635 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1797635 ']' 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.034 [2024-11-27 05:42:37.320113] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:20:50.034 [2024-11-27 05:42:37.320164] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.034 [2024-11-27 05:42:37.396910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.034 [2024-11-27 05:42:37.438423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.034 [2024-11-27 05:42:37.438458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.034 [2024-11-27 05:42:37.438464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.034 [2024-11-27 05:42:37.438470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.034 [2024-11-27 05:42:37.438475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:50.034 [2024-11-27 05:42:37.439006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.034 [2024-11-27 05:42:37.575267] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.034 Malloc0 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.034 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:50.035 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.035 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.035 [2024-11-27 05:42:37.611495] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.035 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.035 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1797662 00:20:50.035 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:50.035 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1797663 00:20:50.035 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:50.035 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1797664 00:20:50.035 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1797662 00:20:50.035 05:42:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:50.035 [2024-11-27 05:42:37.710222] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
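The RPC calls traced above configure the target in four steps: create a TCP transport with a deliberately tiny control-message pool, create a subsystem that allows any host, back it with a malloc bdev, and expose it on 10.0.0.2:4420. A sketch of the same sequence as plain rpc.py calls (the rpc.py path is an assumption about an SPDK source tree; all values come from the log; gated behind `--run` since it needs a live nvmf_tgt):

```shell
#!/usr/bin/env bash
# Sketch of the target configuration driven by control_msg_list.sh above.
set -euo pipefail

RPC="./scripts/rpc.py"                 # assumption: run from an SPDK checkout
SUBNQN=nqn.2024-07.io.spdk:cnode0

configure_target() {
    # --control-msg-num 1 starves the control-message free list on purpose,
    # which is the contended path this test exercises.
    "$RPC" nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1
    "$RPC" nvmf_create_subsystem "$SUBNQN" -a          # -a: allow any host
    "$RPC" bdev_malloc_create -b Malloc0 32 512        # 32 MiB, 512 B blocks
    "$RPC" nvmf_subsystem_add_ns "$SUBNQN" Malloc0
    "$RPC" nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
}

if [[ "${1:-}" == "--run" ]]; then configure_target; fi
```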
00:20:50.035 [2024-11-27 05:42:37.710398] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:50.035 [2024-11-27 05:42:37.710548] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:50.972 Initializing NVMe Controllers 00:20:50.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:50.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:50.972 Initialization complete. Launching workers. 00:20:50.972 ======================================================== 00:20:50.972 Latency(us) 00:20:50.972 Device Information : IOPS MiB/s Average min max 00:20:50.972 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41214.42 40790.44 41900.92 00:20:50.972 ======================================================== 00:20:50.972 Total : 25.00 0.10 41214.42 40790.44 41900.92 00:20:50.972 00:20:50.972 Initializing NVMe Controllers 00:20:50.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:50.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:50.972 Initialization complete. Launching workers. 
00:20:50.972 ======================================================== 00:20:50.972 Latency(us) 00:20:50.972 Device Information : IOPS MiB/s Average min max 00:20:50.972 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41211.61 40623.38 41939.11 00:20:50.972 ======================================================== 00:20:50.972 Total : 25.00 0.10 41211.61 40623.38 41939.11 00:20:50.972 00:20:50.972 Initializing NVMe Controllers 00:20:50.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:50.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:50.972 Initialization complete. Launching workers. 00:20:50.972 ======================================================== 00:20:50.972 Latency(us) 00:20:50.972 Device Information : IOPS MiB/s Average min max 00:20:50.972 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41248.53 40577.64 41953.37 00:20:50.972 ======================================================== 00:20:50.972 Total : 25.00 0.10 41248.53 40577.64 41953.37 00:20:50.972 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1797663 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1797664 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.972 05:42:38 
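The three result tables above come from three spdk_nvme_perf instances launched concurrently, each pinned to its own core mask, all targeting the same subsystem while the target holds only one control message; the script then waits on every pid. The launch/wait pattern can be sketched as (perf binary path and transport ID string taken from the log; gated behind `--run` since it needs a live target):

```shell
#!/usr/bin/env bash
# Sketch of the concurrent-client pattern in control_msg_list.sh above:
# several perf clients contend for the single control message at once.
set -euo pipefail

PERF=./build/bin/spdk_nvme_perf        # assumption: built SPDK tree
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

run_perf_clients() {
    local pids=() core pid
    for core in 0x2 0x4 0x8; do        # one client per core mask, as in the log
        "$PERF" -c "$core" -q 1 -o 4096 -w randread -t 1 -r "$TRID" &
        pids+=("$!")
    done
    for pid in "${pids[@]}"; do
        wait "$pid"                    # propagate any client failure
    done
}

if [[ "${1:-}" == "--run" ]]; then run_perf_clients; fi
```

Note the ~41 ms average latencies in the tables: with only one control message available, two of the three clients queue behind it, which is exactly the behavior under test.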
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.972 rmmod nvme_tcp 00:20:50.972 rmmod nvme_fabrics 00:20:50.972 rmmod nvme_keyring 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:50.972 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:50.973 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1797635 ']' 00:20:50.973 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1797635 00:20:50.973 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1797635 ']' 00:20:50.973 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1797635 00:20:50.973 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:50.973 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.973 05:42:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1797635 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1797635' 00:20:51.233 killing process with pid 1797635 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1797635 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1797635 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.233 05:42:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:53.770 00:20:53.770 real 0m10.144s 00:20:53.770 user 0m6.931s 
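Teardown, traced above after the target process is killed, drops the SPDK firewall rules by filtering the tagged entries out of a full ruleset dump, removes the test namespace, and flushes the leftover initiator address. A sketch mirroring that cleanup (names match the setup phase in the log; requires root, so gated behind `--run`):

```shell
#!/usr/bin/env bash
# Sketch of the nvmftestfini cleanup path traced in the log above.
set -euo pipefail

NS=cvl_0_0_ns_spdk
INI_IF=cvl_0_1

cleanup_test_net() {
    # Remove every rule tagged with the SPDK_NVMF comment in one pass,
    # matching the iptables-save | grep -v SPDK_NVMF | iptables-restore trace.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Deleting the namespace returns its interface to the root namespace.
    ip netns delete "$NS" 2>/dev/null || true
    ip -4 addr flush "$INI_IF"
}

if [[ "${1:-}" == "--run" ]]; then cleanup_test_net; fi
```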
00:20:53.770 sys 0m5.294s 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:53.770 ************************************ 00:20:53.770 END TEST nvmf_control_msg_list 00:20:53.770 ************************************ 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:53.770 ************************************ 00:20:53.770 START TEST nvmf_wait_for_buf 00:20:53.770 ************************************ 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:53.770 * Looking for test storage... 
00:20:53.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.770 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:20:53.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.771 --rc genhtml_branch_coverage=1 00:20:53.771 --rc genhtml_function_coverage=1 00:20:53.771 --rc genhtml_legend=1 00:20:53.771 --rc geninfo_all_blocks=1 00:20:53.771 --rc geninfo_unexecuted_blocks=1 00:20:53.771 00:20:53.771 ' 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:53.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.771 --rc genhtml_branch_coverage=1 00:20:53.771 --rc genhtml_function_coverage=1 00:20:53.771 --rc genhtml_legend=1 00:20:53.771 --rc geninfo_all_blocks=1 00:20:53.771 --rc geninfo_unexecuted_blocks=1 00:20:53.771 00:20:53.771 ' 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:53.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.771 --rc genhtml_branch_coverage=1 00:20:53.771 --rc genhtml_function_coverage=1 00:20:53.771 --rc genhtml_legend=1 00:20:53.771 --rc geninfo_all_blocks=1 00:20:53.771 --rc geninfo_unexecuted_blocks=1 00:20:53.771 00:20:53.771 ' 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:53.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.771 --rc genhtml_branch_coverage=1 00:20:53.771 --rc genhtml_function_coverage=1 00:20:53.771 --rc genhtml_legend=1 00:20:53.771 --rc geninfo_all_blocks=1 00:20:53.771 --rc geninfo_unexecuted_blocks=1 00:20:53.771 00:20:53.771 ' 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:53.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:53.771 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:53.772 05:42:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:00.347 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:00.347 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.347 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:00.348 Found net devices under 0000:86:00.0: cvl_0_0 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.348 05:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:00.348 Found net devices under 0000:86:00.1: cvl_0_1 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.348 05:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.348 05:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:21:00.348 00:21:00.348 --- 10.0.0.2 ping statistics --- 00:21:00.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.348 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:21:00.348 00:21:00.348 --- 10.0.0.1 ping statistics --- 00:21:00.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.348 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1801414 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 1801414 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1801414 ']' 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.348 [2024-11-27 05:42:47.497665] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:21:00.348 [2024-11-27 05:42:47.497738] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.348 [2024-11-27 05:42:47.578799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.348 [2024-11-27 05:42:47.619235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.348 [2024-11-27 05:42:47.619271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:00.348 [2024-11-27 05:42:47.619278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.348 [2024-11-27 05:42:47.619284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.348 [2024-11-27 05:42:47.619290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.348 [2024-11-27 05:42:47.619854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:00.348 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.349 
05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.349 Malloc0 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.349 [2024-11-27 05:42:47.786199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.349 [2024-11-27 05:42:47.814394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:00.349 05:42:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:00.349 [2024-11-27 05:42:47.902747] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:01.289 Initializing NVMe Controllers 00:21:01.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:01.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:01.289 Initialization complete. Launching workers. 00:21:01.289 ======================================================== 00:21:01.289 Latency(us) 00:21:01.289 Device Information : IOPS MiB/s Average min max 00:21:01.289 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33539.31 30009.08 71093.70 00:21:01.289 ======================================================== 00:21:01.289 Total : 124.00 15.50 33539.31 30009.08 71093.70 00:21:01.289 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.549 05:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:01.549 rmmod nvme_tcp 00:21:01.549 rmmod nvme_fabrics 00:21:01.549 rmmod nvme_keyring 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1801414 ']' 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1801414 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1801414 ']' 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1801414 
00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1801414 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1801414' 00:21:01.549 killing process with pid 1801414 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1801414 00:21:01.549 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1801414 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:01.808 05:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.808 05:42:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.716 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:03.716 00:21:03.716 real 0m10.375s 00:21:03.716 user 0m3.934s 00:21:03.716 sys 0m4.878s 00:21:03.716 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:03.716 05:42:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.716 ************************************ 00:21:03.716 END TEST nvmf_wait_for_buf 00:21:03.716 ************************************ 00:21:03.977 05:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:03.977 05:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:03.977 05:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:03.977 05:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:03.977 05:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:03.977 05:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.552 
05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.552 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.553 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.553 05:42:57 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.553 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.553 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.553 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.553 ************************************ 00:21:10.553 START TEST nvmf_perf_adq 00:21:10.553 ************************************ 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:10.553 * Looking for test storage... 00:21:10.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:10.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.553 --rc genhtml_branch_coverage=1 00:21:10.553 --rc genhtml_function_coverage=1 00:21:10.553 --rc genhtml_legend=1 00:21:10.553 --rc geninfo_all_blocks=1 00:21:10.553 --rc geninfo_unexecuted_blocks=1 00:21:10.553 00:21:10.553 ' 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:10.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.553 --rc genhtml_branch_coverage=1 00:21:10.553 --rc genhtml_function_coverage=1 00:21:10.553 --rc genhtml_legend=1 00:21:10.553 --rc geninfo_all_blocks=1 00:21:10.553 --rc geninfo_unexecuted_blocks=1 00:21:10.553 00:21:10.553 ' 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:10.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.553 --rc genhtml_branch_coverage=1 00:21:10.553 --rc genhtml_function_coverage=1 00:21:10.553 --rc genhtml_legend=1 00:21:10.553 --rc geninfo_all_blocks=1 00:21:10.553 --rc geninfo_unexecuted_blocks=1 00:21:10.553 00:21:10.553 ' 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:10.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.553 --rc genhtml_branch_coverage=1 00:21:10.553 --rc genhtml_function_coverage=1 00:21:10.553 --rc genhtml_legend=1 00:21:10.553 --rc geninfo_all_blocks=1 00:21:10.553 --rc geninfo_unexecuted_blocks=1 00:21:10.553 00:21:10.553 ' 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:10.553 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.554 05:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.554 05:42:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:15.830 05:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:15.830 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:15.831 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:15.831 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:15.831 Found net devices under 0000:86:00.0: cvl_0_0 00:21:15.831 05:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:15.831 Found net devices under 0000:86:00.1: cvl_0_1 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:15.831 05:43:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:16.399 05:43:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:18.936 05:43:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:24.244 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:24.244 05:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:24.244 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:24.244 Found net devices under 0000:86:00.0: cvl_0_0 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:24.244 Found net devices under 0000:86:00.1: cvl_0_1 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.244 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:24.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:24.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:21:24.245 00:21:24.245 --- 10.0.0.2 ping statistics --- 00:21:24.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.245 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:21:24.245 00:21:24.245 --- 10.0.0.1 ping statistics --- 00:21:24.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.245 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1809753 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1809753 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1809753 ']' 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.245 05:43:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.245 [2024-11-27 05:43:11.741833] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:21:24.245 [2024-11-27 05:43:11.741877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.245 [2024-11-27 05:43:11.821371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.245 [2024-11-27 05:43:11.863126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.245 [2024-11-27 05:43:11.863164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.245 [2024-11-27 05:43:11.863171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.245 [2024-11-27 05:43:11.863176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.245 [2024-11-27 05:43:11.863181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:24.245 [2024-11-27 05:43:11.864779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.245 [2024-11-27 05:43:11.864887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.245 [2024-11-27 05:43:11.864994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.245 [2024-11-27 05:43:11.864996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:24.814 05:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:24.814 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.815 [2024-11-27 05:43:12.757514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.815 Malloc1 00:21:24.815 05:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.815 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.074 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.074 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.074 [2024-11-27 05:43:12.819782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.074 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.074 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1810004 00:21:25.074 05:43:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:25.074 05:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:26.979 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:26.979 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.979 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.979 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.979 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:26.979 "tick_rate": 2100000000, 00:21:26.979 "poll_groups": [ 00:21:26.979 { 00:21:26.979 "name": "nvmf_tgt_poll_group_000", 00:21:26.979 "admin_qpairs": 1, 00:21:26.979 "io_qpairs": 1, 00:21:26.979 "current_admin_qpairs": 1, 00:21:26.979 "current_io_qpairs": 1, 00:21:26.979 "pending_bdev_io": 0, 00:21:26.979 "completed_nvme_io": 19727, 00:21:26.979 "transports": [ 00:21:26.979 { 00:21:26.979 "trtype": "TCP" 00:21:26.979 } 00:21:26.979 ] 00:21:26.979 }, 00:21:26.979 { 00:21:26.979 "name": "nvmf_tgt_poll_group_001", 00:21:26.979 "admin_qpairs": 0, 00:21:26.979 "io_qpairs": 1, 00:21:26.979 "current_admin_qpairs": 0, 00:21:26.979 "current_io_qpairs": 1, 00:21:26.979 "pending_bdev_io": 0, 00:21:26.979 "completed_nvme_io": 19628, 00:21:26.979 "transports": [ 00:21:26.979 { 00:21:26.979 "trtype": "TCP" 00:21:26.979 } 00:21:26.979 ] 00:21:26.979 }, 00:21:26.979 { 00:21:26.979 "name": "nvmf_tgt_poll_group_002", 00:21:26.979 "admin_qpairs": 0, 00:21:26.979 "io_qpairs": 1, 00:21:26.979 "current_admin_qpairs": 0, 00:21:26.979 "current_io_qpairs": 1, 00:21:26.979 "pending_bdev_io": 0, 00:21:26.979 "completed_nvme_io": 19477, 00:21:26.979 
"transports": [ 00:21:26.979 { 00:21:26.979 "trtype": "TCP" 00:21:26.979 } 00:21:26.979 ] 00:21:26.979 }, 00:21:26.979 { 00:21:26.979 "name": "nvmf_tgt_poll_group_003", 00:21:26.979 "admin_qpairs": 0, 00:21:26.979 "io_qpairs": 1, 00:21:26.979 "current_admin_qpairs": 0, 00:21:26.979 "current_io_qpairs": 1, 00:21:26.979 "pending_bdev_io": 0, 00:21:26.979 "completed_nvme_io": 19642, 00:21:26.979 "transports": [ 00:21:26.979 { 00:21:26.979 "trtype": "TCP" 00:21:26.979 } 00:21:26.979 ] 00:21:26.979 } 00:21:26.979 ] 00:21:26.979 }' 00:21:26.979 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:26.979 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:26.979 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:26.979 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:26.979 05:43:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1810004 00:21:35.097 Initializing NVMe Controllers 00:21:35.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:35.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:35.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:35.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:35.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:35.097 Initialization complete. Launching workers. 
00:21:35.097 ======================================================== 00:21:35.097 Latency(us) 00:21:35.097 Device Information : IOPS MiB/s Average min max 00:21:35.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10334.66 40.37 6193.06 2148.39 10145.61 00:21:35.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10477.25 40.93 6109.19 2377.83 13538.11 00:21:35.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10404.26 40.64 6151.20 1888.69 10053.52 00:21:35.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10416.76 40.69 6144.15 2319.68 10063.54 00:21:35.097 ======================================================== 00:21:35.097 Total : 41632.92 162.63 6149.25 1888.69 13538.11 00:21:35.097 00:21:35.097 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:35.097 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:35.097 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:35.097 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:35.097 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:35.097 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:35.097 05:43:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:35.097 rmmod nvme_tcp 00:21:35.097 rmmod nvme_fabrics 00:21:35.097 rmmod nvme_keyring 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:35.097 05:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1809753 ']' 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1809753 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1809753 ']' 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1809753 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1809753 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1809753' 00:21:35.097 killing process with pid 1809753 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1809753 00:21:35.097 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1809753 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:35.355 
05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.355 05:43:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.887 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:37.887 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:37.887 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:37.887 05:43:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:38.455 05:43:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:40.991 05:43:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:46.270 05:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:46.270 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.270 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:46.271 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:46.271 Found net devices under 0000:86:00.0: cvl_0_0 00:21:46.271 05:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:46.271 Found net devices under 0000:86:00.1: cvl_0_1 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:46.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:21:46.271 00:21:46.271 --- 10.0.0.2 ping statistics --- 00:21:46.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.271 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:21:46.271 00:21:46.271 --- 10.0.0.1 ping statistics --- 00:21:46.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.271 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:46.271 net.core.busy_poll = 1 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:46.271 net.core.busy_read = 1 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:46.271 05:43:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:46.271 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1813783 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1813783 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1813783 ']' 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.272 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:46.272 [2024-11-27 05:43:34.098520] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:21:46.272 [2024-11-27 05:43:34.098568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.272 [2024-11-27 05:43:34.176081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.272 [2024-11-27 05:43:34.217574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.272 [2024-11-27 05:43:34.217612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.272 [2024-11-27 05:43:34.217619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.272 [2024-11-27 05:43:34.217625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:46.272 [2024-11-27 05:43:34.217630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.272 [2024-11-27 05:43:34.219070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.272 [2024-11-27 05:43:34.219177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.272 [2024-11-27 05:43:34.219283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.272 [2024-11-27 05:43:34.219285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.211 05:43:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.211 [2024-11-27 05:43:35.104146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.211 05:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.211 Malloc1 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:47.211 [2024-11-27 05:43:35.165551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1813965 
00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:47.211 05:43:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:49.750 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:49.750 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.750 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.750 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.750 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:49.750 "tick_rate": 2100000000, 00:21:49.750 "poll_groups": [ 00:21:49.750 { 00:21:49.750 "name": "nvmf_tgt_poll_group_000", 00:21:49.750 "admin_qpairs": 1, 00:21:49.750 "io_qpairs": 1, 00:21:49.750 "current_admin_qpairs": 1, 00:21:49.750 "current_io_qpairs": 1, 00:21:49.750 "pending_bdev_io": 0, 00:21:49.750 "completed_nvme_io": 27673, 00:21:49.750 "transports": [ 00:21:49.750 { 00:21:49.750 "trtype": "TCP" 00:21:49.750 } 00:21:49.750 ] 00:21:49.750 }, 00:21:49.750 { 00:21:49.750 "name": "nvmf_tgt_poll_group_001", 00:21:49.750 "admin_qpairs": 0, 00:21:49.750 "io_qpairs": 3, 00:21:49.750 "current_admin_qpairs": 0, 00:21:49.750 "current_io_qpairs": 3, 00:21:49.750 "pending_bdev_io": 0, 00:21:49.750 "completed_nvme_io": 30049, 00:21:49.750 "transports": [ 00:21:49.750 { 00:21:49.750 "trtype": "TCP" 00:21:49.750 } 00:21:49.750 ] 00:21:49.750 }, 00:21:49.750 { 00:21:49.750 "name": "nvmf_tgt_poll_group_002", 00:21:49.750 "admin_qpairs": 0, 00:21:49.750 "io_qpairs": 0, 00:21:49.750 "current_admin_qpairs": 0, 
00:21:49.750 "current_io_qpairs": 0, 00:21:49.750 "pending_bdev_io": 0, 00:21:49.750 "completed_nvme_io": 0, 00:21:49.750 "transports": [ 00:21:49.750 { 00:21:49.750 "trtype": "TCP" 00:21:49.750 } 00:21:49.750 ] 00:21:49.750 }, 00:21:49.750 { 00:21:49.750 "name": "nvmf_tgt_poll_group_003", 00:21:49.750 "admin_qpairs": 0, 00:21:49.750 "io_qpairs": 0, 00:21:49.750 "current_admin_qpairs": 0, 00:21:49.750 "current_io_qpairs": 0, 00:21:49.750 "pending_bdev_io": 0, 00:21:49.750 "completed_nvme_io": 0, 00:21:49.750 "transports": [ 00:21:49.750 { 00:21:49.750 "trtype": "TCP" 00:21:49.750 } 00:21:49.750 ] 00:21:49.750 } 00:21:49.750 ] 00:21:49.750 }' 00:21:49.750 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:49.750 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:49.750 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:49.750 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:49.750 05:43:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1813965 00:21:57.875 Initializing NVMe Controllers 00:21:57.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:57.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:57.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:57.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:57.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:57.875 Initialization complete. Launching workers. 
00:21:57.875 ======================================================== 00:21:57.875 Latency(us) 00:21:57.875 Device Information : IOPS MiB/s Average min max 00:21:57.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5377.50 21.01 11936.10 1779.00 59290.85 00:21:57.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5129.30 20.04 12475.83 1827.12 60033.78 00:21:57.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 15449.00 60.35 4142.29 1355.29 45771.03 00:21:57.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5019.00 19.61 12750.22 1714.55 57334.72 00:21:57.875 ======================================================== 00:21:57.875 Total : 30974.79 121.00 8270.15 1355.29 60033.78 00:21:57.875 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.875 rmmod nvme_tcp 00:21:57.875 rmmod nvme_fabrics 00:21:57.875 rmmod nvme_keyring 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:57.875 05:43:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1813783 ']' 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1813783 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1813783 ']' 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1813783 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813783 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1813783' 00:21:57.875 killing process with pid 1813783 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1813783 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1813783 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:57.875 
05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.875 05:43:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.783 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.783 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:59.783 00:21:59.783 real 0m50.376s 00:21:59.783 user 2m49.464s 00:21:59.783 sys 0m10.418s 00:21:59.783 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:59.783 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.783 ************************************ 00:21:59.783 END TEST nvmf_perf_adq 00:21:59.783 ************************************ 00:21:59.783 05:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:59.783 05:43:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.783 05:43:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.783 05:43:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.043 ************************************ 00:22:00.043 START TEST nvmf_shutdown 00:22:00.043 ************************************ 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:00.043 * Looking for test storage... 00:22:00.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.043 05:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:00.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.043 --rc genhtml_branch_coverage=1 00:22:00.043 --rc genhtml_function_coverage=1 00:22:00.043 --rc genhtml_legend=1 00:22:00.043 --rc geninfo_all_blocks=1 00:22:00.043 --rc geninfo_unexecuted_blocks=1 00:22:00.043 00:22:00.043 ' 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:00.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.043 --rc genhtml_branch_coverage=1 00:22:00.043 --rc genhtml_function_coverage=1 00:22:00.043 --rc genhtml_legend=1 00:22:00.043 --rc geninfo_all_blocks=1 00:22:00.043 --rc geninfo_unexecuted_blocks=1 00:22:00.043 00:22:00.043 ' 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:00.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.043 --rc genhtml_branch_coverage=1 00:22:00.043 --rc genhtml_function_coverage=1 00:22:00.043 --rc genhtml_legend=1 00:22:00.043 --rc geninfo_all_blocks=1 00:22:00.043 --rc geninfo_unexecuted_blocks=1 00:22:00.043 00:22:00.043 ' 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:00.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.043 --rc genhtml_branch_coverage=1 00:22:00.043 --rc genhtml_function_coverage=1 00:22:00.043 --rc genhtml_legend=1 00:22:00.043 --rc geninfo_all_blocks=1 00:22:00.043 --rc geninfo_unexecuted_blocks=1 00:22:00.043 00:22:00.043 ' 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.043 05:43:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.043 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.044 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:00.303 ************************************ 00:22:00.303 START TEST nvmf_shutdown_tc1 00:22:00.303 ************************************ 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.303 05:43:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:06.873 05:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.873 05:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:06.873 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.873 05:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:06.873 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:06.873 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:06.874 Found net devices under 0000:86:00.0: cvl_0_0 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:06.874 Found net devices under 0000:86:00.1: cvl_0_1 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:06.874 05:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:06.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:22:06.874 00:22:06.874 --- 10.0.0.2 ping statistics --- 00:22:06.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.874 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:22:06.874 00:22:06.874 --- 10.0.0.1 ping statistics --- 00:22:06.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.874 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:06.874 05:43:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1819254 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1819254 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1819254 ']' 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:06.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:06.874 [2024-11-27 05:43:54.080062] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:22:06.874 [2024-11-27 05:43:54.080107] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.874 [2024-11-27 05:43:54.158210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.874 [2024-11-27 05:43:54.197691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.874 [2024-11-27 05:43:54.197732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.874 [2024-11-27 05:43:54.197739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.874 [2024-11-27 05:43:54.197744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.874 [2024-11-27 05:43:54.197749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.874 [2024-11-27 05:43:54.199322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.874 [2024-11-27 05:43:54.199430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.874 [2024-11-27 05:43:54.199512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.874 [2024-11-27 05:43:54.199513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:06.874 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:06.875 [2024-11-27 05:43:54.349559] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.875 05:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:06.875 Malloc1 00:22:06.875 [2024-11-27 05:43:54.458891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.875 Malloc2 00:22:06.875 Malloc3 00:22:06.875 Malloc4 00:22:06.875 Malloc5 00:22:06.875 Malloc6 00:22:06.875 Malloc7 00:22:06.875 Malloc8 00:22:06.875 Malloc9 
00:22:06.875 Malloc10 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.875 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1819316 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1819316 /var/tmp/bdevperf.sock 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1819316 ']' 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.135 { 00:22:07.135 "params": { 00:22:07.135 "name": "Nvme$subsystem", 00:22:07.135 "trtype": "$TEST_TRANSPORT", 00:22:07.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.135 "adrfam": "ipv4", 00:22:07.135 "trsvcid": "$NVMF_PORT", 00:22:07.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.135 "hdgst": ${hdgst:-false}, 00:22:07.135 "ddgst": ${ddgst:-false} 00:22:07.135 }, 00:22:07.135 "method": "bdev_nvme_attach_controller" 00:22:07.135 } 00:22:07.135 EOF 00:22:07.135 )") 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.135 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.135 { 00:22:07.135 "params": { 00:22:07.135 "name": "Nvme$subsystem", 00:22:07.136 "trtype": "$TEST_TRANSPORT", 00:22:07.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "$NVMF_PORT", 00:22:07.136 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.136 "hdgst": ${hdgst:-false}, 00:22:07.136 "ddgst": ${ddgst:-false} 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 00:22:07.136 } 00:22:07.136 EOF 00:22:07.136 )") 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.136 { 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme$subsystem", 00:22:07.136 "trtype": "$TEST_TRANSPORT", 00:22:07.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "$NVMF_PORT", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.136 "hdgst": ${hdgst:-false}, 00:22:07.136 "ddgst": ${ddgst:-false} 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 00:22:07.136 } 00:22:07.136 EOF 00:22:07.136 )") 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.136 { 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme$subsystem", 00:22:07.136 "trtype": "$TEST_TRANSPORT", 00:22:07.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "$NVMF_PORT", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.136 "hdgst": 
${hdgst:-false}, 00:22:07.136 "ddgst": ${ddgst:-false} 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 00:22:07.136 } 00:22:07.136 EOF 00:22:07.136 )") 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.136 { 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme$subsystem", 00:22:07.136 "trtype": "$TEST_TRANSPORT", 00:22:07.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "$NVMF_PORT", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.136 "hdgst": ${hdgst:-false}, 00:22:07.136 "ddgst": ${ddgst:-false} 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 00:22:07.136 } 00:22:07.136 EOF 00:22:07.136 )") 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.136 { 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme$subsystem", 00:22:07.136 "trtype": "$TEST_TRANSPORT", 00:22:07.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "$NVMF_PORT", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.136 "hdgst": ${hdgst:-false}, 00:22:07.136 "ddgst": ${ddgst:-false} 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 
00:22:07.136 } 00:22:07.136 EOF 00:22:07.136 )") 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.136 { 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme$subsystem", 00:22:07.136 "trtype": "$TEST_TRANSPORT", 00:22:07.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "$NVMF_PORT", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.136 "hdgst": ${hdgst:-false}, 00:22:07.136 "ddgst": ${ddgst:-false} 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 00:22:07.136 } 00:22:07.136 EOF 00:22:07.136 )") 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:07.136 [2024-11-27 05:43:54.933799] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:22:07.136 [2024-11-27 05:43:54.933847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.136 { 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme$subsystem", 00:22:07.136 "trtype": "$TEST_TRANSPORT", 00:22:07.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "$NVMF_PORT", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.136 "hdgst": ${hdgst:-false}, 00:22:07.136 "ddgst": ${ddgst:-false} 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 00:22:07.136 } 00:22:07.136 EOF 00:22:07.136 )") 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.136 { 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme$subsystem", 00:22:07.136 "trtype": "$TEST_TRANSPORT", 00:22:07.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "$NVMF_PORT", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.136 "hdgst": ${hdgst:-false}, 00:22:07.136 "ddgst": ${ddgst:-false} 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 
00:22:07.136 } 00:22:07.136 EOF 00:22:07.136 )") 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.136 { 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme$subsystem", 00:22:07.136 "trtype": "$TEST_TRANSPORT", 00:22:07.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "$NVMF_PORT", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.136 "hdgst": ${hdgst:-false}, 00:22:07.136 "ddgst": ${ddgst:-false} 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 00:22:07.136 } 00:22:07.136 EOF 00:22:07.136 )") 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:07.136 05:43:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme1", 00:22:07.136 "trtype": "tcp", 00:22:07.136 "traddr": "10.0.0.2", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "4420", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.136 "hdgst": false, 00:22:07.136 "ddgst": false 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 00:22:07.136 },{ 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme2", 00:22:07.136 "trtype": "tcp", 00:22:07.136 "traddr": "10.0.0.2", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "4420", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:07.136 "hdgst": false, 00:22:07.136 "ddgst": false 00:22:07.136 }, 00:22:07.136 "method": "bdev_nvme_attach_controller" 00:22:07.136 },{ 00:22:07.136 "params": { 00:22:07.136 "name": "Nvme3", 00:22:07.136 "trtype": "tcp", 00:22:07.136 "traddr": "10.0.0.2", 00:22:07.136 "adrfam": "ipv4", 00:22:07.136 "trsvcid": "4420", 00:22:07.136 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:07.136 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:07.136 "hdgst": false, 00:22:07.136 "ddgst": false 00:22:07.136 }, 00:22:07.137 "method": "bdev_nvme_attach_controller" 00:22:07.137 },{ 00:22:07.137 "params": { 00:22:07.137 "name": "Nvme4", 00:22:07.137 "trtype": "tcp", 00:22:07.137 "traddr": "10.0.0.2", 00:22:07.137 "adrfam": "ipv4", 00:22:07.137 "trsvcid": "4420", 00:22:07.137 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:07.137 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:07.137 "hdgst": false, 00:22:07.137 "ddgst": false 00:22:07.137 }, 00:22:07.137 "method": "bdev_nvme_attach_controller" 00:22:07.137 },{ 00:22:07.137 "params": { 
00:22:07.137 "name": "Nvme5", 00:22:07.137 "trtype": "tcp", 00:22:07.137 "traddr": "10.0.0.2", 00:22:07.137 "adrfam": "ipv4", 00:22:07.137 "trsvcid": "4420", 00:22:07.137 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:07.137 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:07.137 "hdgst": false, 00:22:07.137 "ddgst": false 00:22:07.137 }, 00:22:07.137 "method": "bdev_nvme_attach_controller" 00:22:07.137 },{ 00:22:07.137 "params": { 00:22:07.137 "name": "Nvme6", 00:22:07.137 "trtype": "tcp", 00:22:07.137 "traddr": "10.0.0.2", 00:22:07.137 "adrfam": "ipv4", 00:22:07.137 "trsvcid": "4420", 00:22:07.137 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:07.137 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:07.137 "hdgst": false, 00:22:07.137 "ddgst": false 00:22:07.137 }, 00:22:07.137 "method": "bdev_nvme_attach_controller" 00:22:07.137 },{ 00:22:07.137 "params": { 00:22:07.137 "name": "Nvme7", 00:22:07.137 "trtype": "tcp", 00:22:07.137 "traddr": "10.0.0.2", 00:22:07.137 "adrfam": "ipv4", 00:22:07.137 "trsvcid": "4420", 00:22:07.137 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:07.137 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:07.137 "hdgst": false, 00:22:07.137 "ddgst": false 00:22:07.137 }, 00:22:07.137 "method": "bdev_nvme_attach_controller" 00:22:07.137 },{ 00:22:07.137 "params": { 00:22:07.137 "name": "Nvme8", 00:22:07.137 "trtype": "tcp", 00:22:07.137 "traddr": "10.0.0.2", 00:22:07.137 "adrfam": "ipv4", 00:22:07.137 "trsvcid": "4420", 00:22:07.137 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:07.137 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:07.137 "hdgst": false, 00:22:07.137 "ddgst": false 00:22:07.137 }, 00:22:07.137 "method": "bdev_nvme_attach_controller" 00:22:07.137 },{ 00:22:07.137 "params": { 00:22:07.137 "name": "Nvme9", 00:22:07.137 "trtype": "tcp", 00:22:07.137 "traddr": "10.0.0.2", 00:22:07.137 "adrfam": "ipv4", 00:22:07.137 "trsvcid": "4420", 00:22:07.137 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:07.137 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:07.137 "hdgst": false, 00:22:07.137 "ddgst": false 00:22:07.137 }, 00:22:07.137 "method": "bdev_nvme_attach_controller" 00:22:07.137 },{ 00:22:07.137 "params": { 00:22:07.137 "name": "Nvme10", 00:22:07.137 "trtype": "tcp", 00:22:07.137 "traddr": "10.0.0.2", 00:22:07.137 "adrfam": "ipv4", 00:22:07.137 "trsvcid": "4420", 00:22:07.137 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:07.137 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:07.137 "hdgst": false, 00:22:07.137 "ddgst": false 00:22:07.137 }, 00:22:07.137 "method": "bdev_nvme_attach_controller" 00:22:07.137 }' 00:22:07.137 [2024-11-27 05:43:55.029123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.137 [2024-11-27 05:43:55.070285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.516 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.516 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:08.516 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:08.516 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.516 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:08.516 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.516 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1819316 00:22:08.516 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:08.516 05:43:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:09.455 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1819316 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1819254 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.455 { 00:22:09.455 "params": { 00:22:09.455 "name": "Nvme$subsystem", 00:22:09.455 "trtype": "$TEST_TRANSPORT", 00:22:09.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.455 "adrfam": "ipv4", 00:22:09.455 "trsvcid": "$NVMF_PORT", 00:22:09.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.455 "hdgst": ${hdgst:-false}, 00:22:09.455 "ddgst": ${ddgst:-false} 00:22:09.455 }, 00:22:09.455 "method": "bdev_nvme_attach_controller" 00:22:09.455 } 00:22:09.455 EOF 00:22:09.455 )") 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:09.455 05:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.455 { 00:22:09.455 "params": { 00:22:09.455 "name": "Nvme$subsystem", 00:22:09.455 "trtype": "$TEST_TRANSPORT", 00:22:09.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.455 "adrfam": "ipv4", 00:22:09.455 "trsvcid": "$NVMF_PORT", 00:22:09.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.455 "hdgst": ${hdgst:-false}, 00:22:09.455 "ddgst": ${ddgst:-false} 00:22:09.455 }, 00:22:09.455 "method": "bdev_nvme_attach_controller" 00:22:09.455 } 00:22:09.455 EOF 00:22:09.455 )") 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.455 { 00:22:09.455 "params": { 00:22:09.455 "name": "Nvme$subsystem", 00:22:09.455 "trtype": "$TEST_TRANSPORT", 00:22:09.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.455 "adrfam": "ipv4", 00:22:09.455 "trsvcid": "$NVMF_PORT", 00:22:09.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.455 "hdgst": ${hdgst:-false}, 00:22:09.455 "ddgst": ${ddgst:-false} 00:22:09.455 }, 00:22:09.455 "method": "bdev_nvme_attach_controller" 00:22:09.455 } 00:22:09.455 EOF 00:22:09.455 )") 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:09.455 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.455 
05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.456 { 00:22:09.456 "params": { 00:22:09.456 "name": "Nvme$subsystem", 00:22:09.456 "trtype": "$TEST_TRANSPORT", 00:22:09.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.456 "adrfam": "ipv4", 00:22:09.456 "trsvcid": "$NVMF_PORT", 00:22:09.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.456 "hdgst": ${hdgst:-false}, 00:22:09.456 "ddgst": ${ddgst:-false} 00:22:09.456 }, 00:22:09.456 "method": "bdev_nvme_attach_controller" 00:22:09.456 } 00:22:09.456 EOF 00:22:09.456 )") 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.456 { 00:22:09.456 "params": { 00:22:09.456 "name": "Nvme$subsystem", 00:22:09.456 "trtype": "$TEST_TRANSPORT", 00:22:09.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.456 "adrfam": "ipv4", 00:22:09.456 "trsvcid": "$NVMF_PORT", 00:22:09.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.456 "hdgst": ${hdgst:-false}, 00:22:09.456 "ddgst": ${ddgst:-false} 00:22:09.456 }, 00:22:09.456 "method": "bdev_nvme_attach_controller" 00:22:09.456 } 00:22:09.456 EOF 00:22:09.456 )") 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:22:09.456 { 00:22:09.456 "params": { 00:22:09.456 "name": "Nvme$subsystem", 00:22:09.456 "trtype": "$TEST_TRANSPORT", 00:22:09.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.456 "adrfam": "ipv4", 00:22:09.456 "trsvcid": "$NVMF_PORT", 00:22:09.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.456 "hdgst": ${hdgst:-false}, 00:22:09.456 "ddgst": ${ddgst:-false} 00:22:09.456 }, 00:22:09.456 "method": "bdev_nvme_attach_controller" 00:22:09.456 } 00:22:09.456 EOF 00:22:09.456 )") 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.456 { 00:22:09.456 "params": { 00:22:09.456 "name": "Nvme$subsystem", 00:22:09.456 "trtype": "$TEST_TRANSPORT", 00:22:09.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.456 "adrfam": "ipv4", 00:22:09.456 "trsvcid": "$NVMF_PORT", 00:22:09.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.456 "hdgst": ${hdgst:-false}, 00:22:09.456 "ddgst": ${ddgst:-false} 00:22:09.456 }, 00:22:09.456 "method": "bdev_nvme_attach_controller" 00:22:09.456 } 00:22:09.456 EOF 00:22:09.456 )") 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:09.456 [2024-11-27 05:43:57.400744] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:22:09.456 [2024-11-27 05:43:57.400791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819803 ] 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.456 { 00:22:09.456 "params": { 00:22:09.456 "name": "Nvme$subsystem", 00:22:09.456 "trtype": "$TEST_TRANSPORT", 00:22:09.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.456 "adrfam": "ipv4", 00:22:09.456 "trsvcid": "$NVMF_PORT", 00:22:09.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.456 "hdgst": ${hdgst:-false}, 00:22:09.456 "ddgst": ${ddgst:-false} 00:22:09.456 }, 00:22:09.456 "method": "bdev_nvme_attach_controller" 00:22:09.456 } 00:22:09.456 EOF 00:22:09.456 )") 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.456 { 00:22:09.456 "params": { 00:22:09.456 "name": "Nvme$subsystem", 00:22:09.456 "trtype": "$TEST_TRANSPORT", 00:22:09.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.456 "adrfam": "ipv4", 00:22:09.456 "trsvcid": "$NVMF_PORT", 00:22:09.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.456 "hdgst": ${hdgst:-false}, 00:22:09.456 "ddgst": ${ddgst:-false} 00:22:09.456 }, 00:22:09.456 "method": 
"bdev_nvme_attach_controller" 00:22:09.456 } 00:22:09.456 EOF 00:22:09.456 )") 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:09.456 { 00:22:09.456 "params": { 00:22:09.456 "name": "Nvme$subsystem", 00:22:09.456 "trtype": "$TEST_TRANSPORT", 00:22:09.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.456 "adrfam": "ipv4", 00:22:09.456 "trsvcid": "$NVMF_PORT", 00:22:09.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.456 "hdgst": ${hdgst:-false}, 00:22:09.456 "ddgst": ${ddgst:-false} 00:22:09.456 }, 00:22:09.456 "method": "bdev_nvme_attach_controller" 00:22:09.456 } 00:22:09.456 EOF 00:22:09.456 )") 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:22:09.456 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:09.457 05:43:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:09.457 "params": { 00:22:09.457 "name": "Nvme1", 00:22:09.457 "trtype": "tcp", 00:22:09.457 "traddr": "10.0.0.2", 00:22:09.457 "adrfam": "ipv4", 00:22:09.457 "trsvcid": "4420", 00:22:09.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.457 "hdgst": false, 00:22:09.457 "ddgst": false 00:22:09.457 }, 00:22:09.457 "method": "bdev_nvme_attach_controller" 00:22:09.457 },{ 00:22:09.457 "params": { 00:22:09.457 "name": "Nvme2", 00:22:09.457 "trtype": "tcp", 00:22:09.457 "traddr": "10.0.0.2", 00:22:09.457 "adrfam": "ipv4", 00:22:09.457 "trsvcid": "4420", 00:22:09.457 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:09.457 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:09.457 "hdgst": false, 00:22:09.457 "ddgst": false 00:22:09.457 }, 00:22:09.457 "method": "bdev_nvme_attach_controller" 00:22:09.457 },{ 00:22:09.457 "params": { 00:22:09.457 "name": "Nvme3", 00:22:09.457 "trtype": "tcp", 00:22:09.457 "traddr": "10.0.0.2", 00:22:09.457 "adrfam": "ipv4", 00:22:09.457 "trsvcid": "4420", 00:22:09.457 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:09.457 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:09.457 "hdgst": false, 00:22:09.457 "ddgst": false 00:22:09.457 }, 00:22:09.457 "method": "bdev_nvme_attach_controller" 00:22:09.457 },{ 00:22:09.457 "params": { 00:22:09.457 "name": "Nvme4", 00:22:09.457 "trtype": "tcp", 00:22:09.457 "traddr": "10.0.0.2", 00:22:09.457 "adrfam": "ipv4", 00:22:09.457 "trsvcid": "4420", 00:22:09.457 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:09.457 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:09.457 "hdgst": false, 00:22:09.457 "ddgst": false 00:22:09.457 }, 00:22:09.457 "method": "bdev_nvme_attach_controller" 00:22:09.457 },{ 00:22:09.457 "params": { 
00:22:09.457 "name": "Nvme5", 00:22:09.457 "trtype": "tcp", 00:22:09.457 "traddr": "10.0.0.2", 00:22:09.457 "adrfam": "ipv4", 00:22:09.457 "trsvcid": "4420", 00:22:09.457 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:09.457 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:09.457 "hdgst": false, 00:22:09.457 "ddgst": false 00:22:09.457 }, 00:22:09.457 "method": "bdev_nvme_attach_controller" 00:22:09.457 },{ 00:22:09.457 "params": { 00:22:09.457 "name": "Nvme6", 00:22:09.457 "trtype": "tcp", 00:22:09.457 "traddr": "10.0.0.2", 00:22:09.457 "adrfam": "ipv4", 00:22:09.457 "trsvcid": "4420", 00:22:09.457 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:09.457 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:09.457 "hdgst": false, 00:22:09.457 "ddgst": false 00:22:09.457 }, 00:22:09.457 "method": "bdev_nvme_attach_controller" 00:22:09.457 },{ 00:22:09.457 "params": { 00:22:09.457 "name": "Nvme7", 00:22:09.457 "trtype": "tcp", 00:22:09.457 "traddr": "10.0.0.2", 00:22:09.457 "adrfam": "ipv4", 00:22:09.457 "trsvcid": "4420", 00:22:09.457 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:09.457 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:09.457 "hdgst": false, 00:22:09.457 "ddgst": false 00:22:09.457 }, 00:22:09.457 "method": "bdev_nvme_attach_controller" 00:22:09.457 },{ 00:22:09.457 "params": { 00:22:09.457 "name": "Nvme8", 00:22:09.457 "trtype": "tcp", 00:22:09.457 "traddr": "10.0.0.2", 00:22:09.457 "adrfam": "ipv4", 00:22:09.457 "trsvcid": "4420", 00:22:09.457 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:09.457 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:09.457 "hdgst": false, 00:22:09.457 "ddgst": false 00:22:09.457 }, 00:22:09.457 "method": "bdev_nvme_attach_controller" 00:22:09.457 },{ 00:22:09.457 "params": { 00:22:09.457 "name": "Nvme9", 00:22:09.457 "trtype": "tcp", 00:22:09.457 "traddr": "10.0.0.2", 00:22:09.457 "adrfam": "ipv4", 00:22:09.457 "trsvcid": "4420", 00:22:09.457 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:09.457 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:09.457 "hdgst": false, 00:22:09.457 "ddgst": false 00:22:09.457 }, 00:22:09.457 "method": "bdev_nvme_attach_controller" 00:22:09.457 },{ 00:22:09.457 "params": { 00:22:09.457 "name": "Nvme10", 00:22:09.457 "trtype": "tcp", 00:22:09.457 "traddr": "10.0.0.2", 00:22:09.457 "adrfam": "ipv4", 00:22:09.457 "trsvcid": "4420", 00:22:09.457 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:09.457 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:09.457 "hdgst": false, 00:22:09.457 "ddgst": false 00:22:09.457 }, 00:22:09.457 "method": "bdev_nvme_attach_controller" 00:22:09.457 }' 00:22:09.716 [2024-11-27 05:43:57.476059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.716 [2024-11-27 05:43:57.516847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.094 Running I/O for 1 seconds... 00:22:12.295 2260.00 IOPS, 141.25 MiB/s 00:22:12.295 Latency(us) 00:22:12.295 [2024-11-27T04:44:00.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.295 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.295 Verification LBA range: start 0x0 length 0x400 00:22:12.295 Nvme1n1 : 1.13 287.14 17.95 0.00 0.00 220368.28 7177.75 210713.84 00:22:12.295 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.295 Verification LBA range: start 0x0 length 0x400 00:22:12.295 Nvme2n1 : 1.09 234.95 14.68 0.00 0.00 266158.81 18599.74 241671.80 00:22:12.295 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.295 Verification LBA range: start 0x0 length 0x400 00:22:12.295 Nvme3n1 : 1.13 287.96 18.00 0.00 0.00 213007.99 7552.24 212711.13 00:22:12.295 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.295 Verification LBA range: start 0x0 length 0x400 00:22:12.295 Nvme4n1 : 1.12 289.88 18.12 0.00 0.00 205344.40 14730.00 209715.20 00:22:12.295 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:12.295 Verification LBA range: start 0x0 length 0x400 00:22:12.295 Nvme5n1 : 1.15 278.34 17.40 0.00 0.00 215692.63 17351.44 230686.72 00:22:12.295 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.295 Verification LBA range: start 0x0 length 0x400 00:22:12.295 Nvme6n1 : 1.14 280.53 17.53 0.00 0.00 210359.15 15416.56 213709.78 00:22:12.295 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.295 Verification LBA range: start 0x0 length 0x400 00:22:12.295 Nvme7n1 : 1.14 280.77 17.55 0.00 0.00 207531.25 27088.21 215707.06 00:22:12.295 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.295 Verification LBA range: start 0x0 length 0x400 00:22:12.295 Nvme8n1 : 1.15 279.06 17.44 0.00 0.00 205767.14 14667.58 215707.06 00:22:12.295 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.295 Verification LBA range: start 0x0 length 0x400 00:22:12.295 Nvme9n1 : 1.16 277.00 17.31 0.00 0.00 204500.31 14293.09 222697.57 00:22:12.295 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:12.295 Verification LBA range: start 0x0 length 0x400 00:22:12.295 Nvme10n1 : 1.15 282.14 17.63 0.00 0.00 197490.07 16227.96 232684.01 00:22:12.295 [2024-11-27T04:44:00.299Z] =================================================================================================================== 00:22:12.295 [2024-11-27T04:44:00.299Z] Total : 2777.78 173.61 0.00 0.00 213541.60 7177.75 241671.80 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:12.554 rmmod nvme_tcp 00:22:12.554 rmmod nvme_fabrics 00:22:12.554 rmmod nvme_keyring 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1819254 ']' 00:22:12.554 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1819254 00:22:12.555 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1819254 ']' 00:22:12.555 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1819254 00:22:12.555 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:12.555 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.555 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1819254 00:22:12.555 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:12.555 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:12.555 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1819254' 00:22:12.555 killing process with pid 1819254 00:22:12.555 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1819254 00:22:12.555 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1819254 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:13.124 05:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.124 05:44:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.033 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:15.033 00:22:15.033 real 0m14.877s 00:22:15.033 user 0m32.020s 00:22:15.033 sys 0m5.872s 00:22:15.033 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:15.033 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:15.033 ************************************ 00:22:15.033 END TEST nvmf_shutdown_tc1 00:22:15.033 ************************************ 00:22:15.033 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:15.033 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:15.033 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:15.033 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:15.033 ************************************ 00:22:15.033 
START TEST nvmf_shutdown_tc2 00:22:15.033 ************************************ 00:22:15.033 05:44:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:15.033 05:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.033 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.034 05:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.034 05:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:15.034 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:15.034 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:15.034 05:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.034 05:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:15.034 Found net devices under 0000:86:00.0: cvl_0_0 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:15.034 Found net devices under 0000:86:00.1: cvl_0_1 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.034 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.293 05:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:15.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:15.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.414 ms 00:22:15.293 00:22:15.293 --- 10.0.0.2 ping statistics --- 00:22:15.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.293 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:22:15.293 00:22:15.293 --- 10.0.0.1 ping statistics --- 00:22:15.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.293 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:15.293 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.294 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:15.294 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:15.294 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.294 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:15.294 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:15.294 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.294 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:15.294 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:15.553 05:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1820824 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1820824 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1820824 ']' 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.553 05:44:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.553 [2024-11-27 05:44:03.358373] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:22:15.553 [2024-11-27 05:44:03.358418] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.553 [2024-11-27 05:44:03.437704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:15.553 [2024-11-27 05:44:03.482032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.553 [2024-11-27 05:44:03.482063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.553 [2024-11-27 05:44:03.482071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.553 [2024-11-27 05:44:03.482078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.553 [2024-11-27 05:44:03.482083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:15.553 [2024-11-27 05:44:03.483637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.553 [2024-11-27 05:44:03.483746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:15.553 [2024-11-27 05:44:03.483806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.553 [2024-11-27 05:44:03.483807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.490 [2024-11-27 05:44:04.239436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.490 05:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:16.490 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.491 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:16.491 Malloc1 00:22:16.491 [2024-11-27 05:44:04.354109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.491 Malloc2 00:22:16.491 Malloc3 00:22:16.491 Malloc4 00:22:16.750 Malloc5 00:22:16.750 Malloc6 00:22:16.750 Malloc7 00:22:16.750 Malloc8 00:22:16.750 Malloc9 
00:22:16.750 Malloc10 00:22:16.750 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.750 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:16.750 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.750 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1821107 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1821107 /var/tmp/bdevperf.sock 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1821107 ']' 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:17.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.010 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.010 { 00:22:17.010 "params": { 00:22:17.010 "name": "Nvme$subsystem", 00:22:17.010 "trtype": "$TEST_TRANSPORT", 00:22:17.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.010 "adrfam": "ipv4", 00:22:17.010 "trsvcid": "$NVMF_PORT", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.011 "hdgst": ${hdgst:-false}, 00:22:17.011 "ddgst": ${ddgst:-false} 00:22:17.011 }, 00:22:17.011 "method": "bdev_nvme_attach_controller" 00:22:17.011 } 00:22:17.011 EOF 00:22:17.011 )") 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.011 { 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme$subsystem", 00:22:17.011 "trtype": "$TEST_TRANSPORT", 00:22:17.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.011 
"adrfam": "ipv4", 00:22:17.011 "trsvcid": "$NVMF_PORT", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.011 "hdgst": ${hdgst:-false}, 00:22:17.011 "ddgst": ${ddgst:-false} 00:22:17.011 }, 00:22:17.011 "method": "bdev_nvme_attach_controller" 00:22:17.011 } 00:22:17.011 EOF 00:22:17.011 )") 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.011 { 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme$subsystem", 00:22:17.011 "trtype": "$TEST_TRANSPORT", 00:22:17.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.011 "adrfam": "ipv4", 00:22:17.011 "trsvcid": "$NVMF_PORT", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.011 "hdgst": ${hdgst:-false}, 00:22:17.011 "ddgst": ${ddgst:-false} 00:22:17.011 }, 00:22:17.011 "method": "bdev_nvme_attach_controller" 00:22:17.011 } 00:22:17.011 EOF 00:22:17.011 )") 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.011 { 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme$subsystem", 00:22:17.011 "trtype": "$TEST_TRANSPORT", 00:22:17.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.011 "adrfam": "ipv4", 00:22:17.011 "trsvcid": "$NVMF_PORT", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.011 "hdgst": ${hdgst:-false}, 00:22:17.011 "ddgst": ${ddgst:-false} 00:22:17.011 }, 00:22:17.011 "method": "bdev_nvme_attach_controller" 00:22:17.011 } 00:22:17.011 EOF 00:22:17.011 )") 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.011 { 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme$subsystem", 00:22:17.011 "trtype": "$TEST_TRANSPORT", 00:22:17.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.011 "adrfam": "ipv4", 00:22:17.011 "trsvcid": "$NVMF_PORT", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.011 "hdgst": ${hdgst:-false}, 00:22:17.011 "ddgst": ${ddgst:-false} 00:22:17.011 }, 00:22:17.011 "method": "bdev_nvme_attach_controller" 00:22:17.011 } 00:22:17.011 EOF 00:22:17.011 )") 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.011 { 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme$subsystem", 00:22:17.011 "trtype": "$TEST_TRANSPORT", 00:22:17.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.011 "adrfam": "ipv4", 00:22:17.011 "trsvcid": "$NVMF_PORT", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.011 "hdgst": ${hdgst:-false}, 00:22:17.011 "ddgst": 
${ddgst:-false} 00:22:17.011 }, 00:22:17.011 "method": "bdev_nvme_attach_controller" 00:22:17.011 } 00:22:17.011 EOF 00:22:17.011 )") 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.011 { 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme$subsystem", 00:22:17.011 "trtype": "$TEST_TRANSPORT", 00:22:17.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.011 "adrfam": "ipv4", 00:22:17.011 "trsvcid": "$NVMF_PORT", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.011 "hdgst": ${hdgst:-false}, 00:22:17.011 "ddgst": ${ddgst:-false} 00:22:17.011 }, 00:22:17.011 "method": "bdev_nvme_attach_controller" 00:22:17.011 } 00:22:17.011 EOF 00:22:17.011 )") 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:17.011 [2024-11-27 05:44:04.828047] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:22:17.011 [2024-11-27 05:44:04.828095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821107 ] 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.011 { 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme$subsystem", 00:22:17.011 "trtype": "$TEST_TRANSPORT", 00:22:17.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.011 "adrfam": "ipv4", 00:22:17.011 "trsvcid": "$NVMF_PORT", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.011 "hdgst": ${hdgst:-false}, 00:22:17.011 "ddgst": ${ddgst:-false} 00:22:17.011 }, 00:22:17.011 "method": "bdev_nvme_attach_controller" 00:22:17.011 } 00:22:17.011 EOF 00:22:17.011 )") 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.011 { 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme$subsystem", 00:22:17.011 "trtype": "$TEST_TRANSPORT", 00:22:17.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.011 "adrfam": "ipv4", 00:22:17.011 "trsvcid": "$NVMF_PORT", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.011 "hdgst": ${hdgst:-false}, 00:22:17.011 "ddgst": ${ddgst:-false} 00:22:17.011 }, 00:22:17.011 "method": 
"bdev_nvme_attach_controller" 00:22:17.011 } 00:22:17.011 EOF 00:22:17.011 )") 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:17.011 { 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme$subsystem", 00:22:17.011 "trtype": "$TEST_TRANSPORT", 00:22:17.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:17.011 "adrfam": "ipv4", 00:22:17.011 "trsvcid": "$NVMF_PORT", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:17.011 "hdgst": ${hdgst:-false}, 00:22:17.011 "ddgst": ${ddgst:-false} 00:22:17.011 }, 00:22:17.011 "method": "bdev_nvme_attach_controller" 00:22:17.011 } 00:22:17.011 EOF 00:22:17.011 )") 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:17.011 05:44:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme1", 00:22:17.011 "trtype": "tcp", 00:22:17.011 "traddr": "10.0.0.2", 00:22:17.011 "adrfam": "ipv4", 00:22:17.011 "trsvcid": "4420", 00:22:17.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.011 "hdgst": false, 00:22:17.011 "ddgst": false 00:22:17.011 }, 00:22:17.011 "method": "bdev_nvme_attach_controller" 00:22:17.011 },{ 00:22:17.011 "params": { 00:22:17.011 "name": "Nvme2", 00:22:17.011 "trtype": "tcp", 00:22:17.012 "traddr": "10.0.0.2", 00:22:17.012 "adrfam": "ipv4", 00:22:17.012 "trsvcid": "4420", 00:22:17.012 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:17.012 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:17.012 "hdgst": false, 00:22:17.012 "ddgst": false 00:22:17.012 }, 00:22:17.012 "method": "bdev_nvme_attach_controller" 00:22:17.012 },{ 00:22:17.012 "params": { 00:22:17.012 "name": "Nvme3", 00:22:17.012 "trtype": "tcp", 00:22:17.012 "traddr": "10.0.0.2", 00:22:17.012 "adrfam": "ipv4", 00:22:17.012 "trsvcid": "4420", 00:22:17.012 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:17.012 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:17.012 "hdgst": false, 00:22:17.012 "ddgst": false 00:22:17.012 }, 00:22:17.012 "method": "bdev_nvme_attach_controller" 00:22:17.012 },{ 00:22:17.012 "params": { 00:22:17.012 "name": "Nvme4", 00:22:17.012 "trtype": "tcp", 00:22:17.012 "traddr": "10.0.0.2", 00:22:17.012 "adrfam": "ipv4", 00:22:17.012 "trsvcid": "4420", 00:22:17.012 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:17.012 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:17.012 "hdgst": false, 00:22:17.012 "ddgst": false 00:22:17.012 }, 00:22:17.012 "method": "bdev_nvme_attach_controller" 00:22:17.012 },{ 00:22:17.012 "params": { 
00:22:17.012 "name": "Nvme5", 00:22:17.012 "trtype": "tcp", 00:22:17.012 "traddr": "10.0.0.2", 00:22:17.012 "adrfam": "ipv4", 00:22:17.012 "trsvcid": "4420", 00:22:17.012 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:17.012 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:17.012 "hdgst": false, 00:22:17.012 "ddgst": false 00:22:17.012 }, 00:22:17.012 "method": "bdev_nvme_attach_controller" 00:22:17.012 },{ 00:22:17.012 "params": { 00:22:17.012 "name": "Nvme6", 00:22:17.012 "trtype": "tcp", 00:22:17.012 "traddr": "10.0.0.2", 00:22:17.012 "adrfam": "ipv4", 00:22:17.012 "trsvcid": "4420", 00:22:17.012 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:17.012 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:17.012 "hdgst": false, 00:22:17.012 "ddgst": false 00:22:17.012 }, 00:22:17.012 "method": "bdev_nvme_attach_controller" 00:22:17.012 },{ 00:22:17.012 "params": { 00:22:17.012 "name": "Nvme7", 00:22:17.012 "trtype": "tcp", 00:22:17.012 "traddr": "10.0.0.2", 00:22:17.012 "adrfam": "ipv4", 00:22:17.012 "trsvcid": "4420", 00:22:17.012 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:17.012 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:17.012 "hdgst": false, 00:22:17.012 "ddgst": false 00:22:17.012 }, 00:22:17.012 "method": "bdev_nvme_attach_controller" 00:22:17.012 },{ 00:22:17.012 "params": { 00:22:17.012 "name": "Nvme8", 00:22:17.012 "trtype": "tcp", 00:22:17.012 "traddr": "10.0.0.2", 00:22:17.012 "adrfam": "ipv4", 00:22:17.012 "trsvcid": "4420", 00:22:17.012 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:17.012 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:17.012 "hdgst": false, 00:22:17.012 "ddgst": false 00:22:17.012 }, 00:22:17.012 "method": "bdev_nvme_attach_controller" 00:22:17.012 },{ 00:22:17.012 "params": { 00:22:17.012 "name": "Nvme9", 00:22:17.012 "trtype": "tcp", 00:22:17.012 "traddr": "10.0.0.2", 00:22:17.012 "adrfam": "ipv4", 00:22:17.012 "trsvcid": "4420", 00:22:17.012 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:17.012 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:17.012 "hdgst": false, 00:22:17.012 "ddgst": false 00:22:17.012 }, 00:22:17.012 "method": "bdev_nvme_attach_controller" 00:22:17.012 },{ 00:22:17.012 "params": { 00:22:17.012 "name": "Nvme10", 00:22:17.012 "trtype": "tcp", 00:22:17.012 "traddr": "10.0.0.2", 00:22:17.012 "adrfam": "ipv4", 00:22:17.012 "trsvcid": "4420", 00:22:17.012 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:17.012 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:17.012 "hdgst": false, 00:22:17.012 "ddgst": false 00:22:17.012 }, 00:22:17.012 "method": "bdev_nvme_attach_controller" 00:22:17.012 }' 00:22:17.012 [2024-11-27 05:44:04.904290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.012 [2024-11-27 05:44:04.945334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.391 Running I/O for 10 seconds... 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:18.959 05:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=137 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 137 -ge 100 ']' 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 1821107 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1821107 ']' 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1821107 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1821107 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1821107' 00:22:18.959 killing process with pid 1821107 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1821107 00:22:18.959 05:44:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1821107 00:22:18.959 Received shutdown signal, test time was about 0.736626 seconds 00:22:18.959 00:22:18.959 Latency(us) 00:22:18.959 [2024-11-27T04:44:06.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.959 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.959 Verification LBA range: start 0x0 length 0x400 00:22:18.959 Nvme1n1 : 0.74 347.86 21.74 0.00 0.00 180502.43 16477.62 175761.31 00:22:18.959 Job: Nvme2n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:22:18.959 Verification LBA range: start 0x0 length 0x400 00:22:18.959 Nvme2n1 : 0.71 275.94 17.25 0.00 0.00 221881.71 4462.69 210713.84 00:22:18.959 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.959 Verification LBA range: start 0x0 length 0x400 00:22:18.959 Nvme3n1 : 0.71 271.65 16.98 0.00 0.00 222093.41 15291.73 214708.42 00:22:18.959 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.959 Verification LBA range: start 0x0 length 0x400 00:22:18.959 Nvme4n1 : 0.71 294.17 18.39 0.00 0.00 196857.81 12233.39 193736.90 00:22:18.959 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.959 Verification LBA range: start 0x0 length 0x400 00:22:18.959 Nvme5n1 : 0.73 261.61 16.35 0.00 0.00 220841.04 18474.91 245666.38 00:22:18.959 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.959 Verification LBA range: start 0x0 length 0x400 00:22:18.959 Nvme6n1 : 0.70 273.60 17.10 0.00 0.00 204778.87 18225.25 226692.14 00:22:18.959 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.959 Verification LBA range: start 0x0 length 0x400 00:22:18.959 Nvme7n1 : 0.72 267.49 16.72 0.00 0.00 205253.16 15603.81 208716.56 00:22:18.959 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.959 Verification LBA range: start 0x0 length 0x400 00:22:18.959 Nvme8n1 : 0.72 265.53 16.60 0.00 0.00 201748.81 25465.42 204721.98 00:22:18.959 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.959 Verification LBA range: start 0x0 length 0x400 00:22:18.959 Nvme9n1 : 0.73 264.26 16.52 0.00 0.00 197776.99 15666.22 218702.99 00:22:18.959 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.959 Verification LBA range: start 0x0 length 0x400 00:22:18.959 Nvme10n1 : 0.73 262.69 16.42 0.00 0.00 194028.50 29085.50 
225693.50 00:22:18.959 [2024-11-27T04:44:06.963Z] =================================================================================================================== 00:22:18.959 [2024-11-27T04:44:06.963Z] Total : 2784.80 174.05 0.00 0.00 203773.46 4462.69 245666.38 00:22:19.219 05:44:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1820824 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:20.157 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:20.157 rmmod nvme_tcp 00:22:20.157 rmmod nvme_fabrics 00:22:20.416 rmmod nvme_keyring 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1820824 ']' 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1820824 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1820824 ']' 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1820824 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1820824 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1820824' 00:22:20.417 killing process with pid 1820824 00:22:20.417 05:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1820824 00:22:20.417 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1820824 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.675 05:44:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.212 00:22:23.212 real 
0m7.701s 00:22:23.212 user 0m22.927s 00:22:23.212 sys 0m1.293s 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:23.212 ************************************ 00:22:23.212 END TEST nvmf_shutdown_tc2 00:22:23.212 ************************************ 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:23.212 ************************************ 00:22:23.212 START TEST nvmf_shutdown_tc3 00:22:23.212 ************************************ 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.212 
05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.212 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:23.213 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:23.213 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.213 05:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:23.213 Found net devices under 0000:86:00.0: cvl_0_0 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.213 
05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:23.213 Found net devices under 0000:86:00.1: cvl_0_1 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.213 05:44:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.213 05:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.213 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.213 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.213 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:22:23.213 00:22:23.213 --- 10.0.0.2 ping statistics --- 00:22:23.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.213 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:22:23.214 00:22:23.214 --- 10.0.0.1 ping statistics --- 00:22:23.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.214 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.214 
05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1822358 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1822358 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1822358 ']' 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.214 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.214 [2024-11-27 05:44:11.142557] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:22:23.214 [2024-11-27 05:44:11.142602] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.472 [2024-11-27 05:44:11.218977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.472 [2024-11-27 05:44:11.260923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.472 [2024-11-27 05:44:11.260968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.472 [2024-11-27 05:44:11.260975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.472 [2024-11-27 05:44:11.260981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.472 [2024-11-27 05:44:11.260986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.472 [2024-11-27 05:44:11.262610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.472 [2024-11-27 05:44:11.262719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.472 [2024-11-27 05:44:11.262824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.472 [2024-11-27 05:44:11.262825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:24.039 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.040 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:24.040 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:24.040 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.040 05:44:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:24.040 [2024-11-27 05:44:12.019006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.040 05:44:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.040 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.298 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:24.298 Malloc1 00:22:24.298 [2024-11-27 05:44:12.126549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.298 Malloc2 00:22:24.298 Malloc3 00:22:24.298 Malloc4 00:22:24.298 Malloc5 00:22:24.555 Malloc6 00:22:24.555 Malloc7 00:22:24.555 Malloc8 00:22:24.555 Malloc9 
00:22:24.555 Malloc10 00:22:24.555 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.555 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:24.555 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.555 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:24.555 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1822635 00:22:24.555 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1822635 /var/tmp/bdevperf.sock 00:22:24.555 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1822635 ']' 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:24.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.556 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.556 { 00:22:24.556 "params": { 00:22:24.556 "name": "Nvme$subsystem", 00:22:24.556 "trtype": "$TEST_TRANSPORT", 00:22:24.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.556 "adrfam": "ipv4", 00:22:24.556 "trsvcid": "$NVMF_PORT", 00:22:24.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.556 "hdgst": ${hdgst:-false}, 00:22:24.556 "ddgst": ${ddgst:-false} 00:22:24.556 }, 00:22:24.556 "method": "bdev_nvme_attach_controller" 00:22:24.556 } 00:22:24.556 EOF 00:22:24.556 )") 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.815 { 00:22:24.815 "params": { 00:22:24.815 "name": "Nvme$subsystem", 00:22:24.815 "trtype": "$TEST_TRANSPORT", 00:22:24.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.815 
"adrfam": "ipv4", 00:22:24.815 "trsvcid": "$NVMF_PORT", 00:22:24.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.815 "hdgst": ${hdgst:-false}, 00:22:24.815 "ddgst": ${ddgst:-false} 00:22:24.815 }, 00:22:24.815 "method": "bdev_nvme_attach_controller" 00:22:24.815 } 00:22:24.815 EOF 00:22:24.815 )") 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.815 { 00:22:24.815 "params": { 00:22:24.815 "name": "Nvme$subsystem", 00:22:24.815 "trtype": "$TEST_TRANSPORT", 00:22:24.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.815 "adrfam": "ipv4", 00:22:24.815 "trsvcid": "$NVMF_PORT", 00:22:24.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.815 "hdgst": ${hdgst:-false}, 00:22:24.815 "ddgst": ${ddgst:-false} 00:22:24.815 }, 00:22:24.815 "method": "bdev_nvme_attach_controller" 00:22:24.815 } 00:22:24.815 EOF 00:22:24.815 )") 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.815 { 00:22:24.815 "params": { 00:22:24.815 "name": "Nvme$subsystem", 00:22:24.815 "trtype": "$TEST_TRANSPORT", 00:22:24.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.815 "adrfam": "ipv4", 00:22:24.815 "trsvcid": "$NVMF_PORT", 00:22:24.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:24.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.815 "hdgst": ${hdgst:-false}, 00:22:24.815 "ddgst": ${ddgst:-false} 00:22:24.815 }, 00:22:24.815 "method": "bdev_nvme_attach_controller" 00:22:24.815 } 00:22:24.815 EOF 00:22:24.815 )") 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.815 { 00:22:24.815 "params": { 00:22:24.815 "name": "Nvme$subsystem", 00:22:24.815 "trtype": "$TEST_TRANSPORT", 00:22:24.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.815 "adrfam": "ipv4", 00:22:24.815 "trsvcid": "$NVMF_PORT", 00:22:24.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.815 "hdgst": ${hdgst:-false}, 00:22:24.815 "ddgst": ${ddgst:-false} 00:22:24.815 }, 00:22:24.815 "method": "bdev_nvme_attach_controller" 00:22:24.815 } 00:22:24.815 EOF 00:22:24.815 )") 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.815 { 00:22:24.815 "params": { 00:22:24.815 "name": "Nvme$subsystem", 00:22:24.815 "trtype": "$TEST_TRANSPORT", 00:22:24.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.815 "adrfam": "ipv4", 00:22:24.815 "trsvcid": "$NVMF_PORT", 00:22:24.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.815 "hdgst": ${hdgst:-false}, 00:22:24.815 "ddgst": 
${ddgst:-false} 00:22:24.815 }, 00:22:24.815 "method": "bdev_nvme_attach_controller" 00:22:24.815 } 00:22:24.815 EOF 00:22:24.815 )") 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.815 { 00:22:24.815 "params": { 00:22:24.815 "name": "Nvme$subsystem", 00:22:24.815 "trtype": "$TEST_TRANSPORT", 00:22:24.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.815 "adrfam": "ipv4", 00:22:24.815 "trsvcid": "$NVMF_PORT", 00:22:24.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.815 "hdgst": ${hdgst:-false}, 00:22:24.815 "ddgst": ${ddgst:-false} 00:22:24.815 }, 00:22:24.815 "method": "bdev_nvme_attach_controller" 00:22:24.815 } 00:22:24.815 EOF 00:22:24.815 )") 00:22:24.815 [2024-11-27 05:44:12.598071] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:22:24.815 [2024-11-27 05:44:12.598119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1822635 ] 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.815 { 00:22:24.815 "params": { 00:22:24.815 "name": "Nvme$subsystem", 00:22:24.815 "trtype": "$TEST_TRANSPORT", 00:22:24.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.815 "adrfam": "ipv4", 00:22:24.815 "trsvcid": "$NVMF_PORT", 00:22:24.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.815 "hdgst": ${hdgst:-false}, 00:22:24.815 "ddgst": ${ddgst:-false} 00:22:24.815 }, 00:22:24.815 "method": "bdev_nvme_attach_controller" 00:22:24.815 } 00:22:24.815 EOF 00:22:24.815 )") 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.815 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.815 { 00:22:24.815 "params": { 00:22:24.815 "name": "Nvme$subsystem", 00:22:24.815 "trtype": "$TEST_TRANSPORT", 00:22:24.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.815 "adrfam": "ipv4", 00:22:24.815 "trsvcid": "$NVMF_PORT", 00:22:24.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.815 "hdgst": 
${hdgst:-false}, 00:22:24.815 "ddgst": ${ddgst:-false} 00:22:24.815 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 } 00:22:24.816 EOF 00:22:24.816 )") 00:22:24.816 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:24.816 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:24.816 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:24.816 { 00:22:24.816 "params": { 00:22:24.816 "name": "Nvme$subsystem", 00:22:24.816 "trtype": "$TEST_TRANSPORT", 00:22:24.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "$NVMF_PORT", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.816 "hdgst": ${hdgst:-false}, 00:22:24.816 "ddgst": ${ddgst:-false} 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 } 00:22:24.816 EOF 00:22:24.816 )") 00:22:24.816 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:24.816 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:22:24.816 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:24.816 05:44:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:24.816 "params": { 00:22:24.816 "name": "Nvme1", 00:22:24.816 "trtype": "tcp", 00:22:24.816 "traddr": "10.0.0.2", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "4420", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.816 "hdgst": false, 00:22:24.816 "ddgst": false 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 },{ 00:22:24.816 "params": { 00:22:24.816 "name": "Nvme2", 00:22:24.816 "trtype": "tcp", 00:22:24.816 "traddr": "10.0.0.2", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "4420", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:24.816 "hdgst": false, 00:22:24.816 "ddgst": false 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 },{ 00:22:24.816 "params": { 00:22:24.816 "name": "Nvme3", 00:22:24.816 "trtype": "tcp", 00:22:24.816 "traddr": "10.0.0.2", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "4420", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:24.816 "hdgst": false, 00:22:24.816 "ddgst": false 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 },{ 00:22:24.816 "params": { 00:22:24.816 "name": "Nvme4", 00:22:24.816 "trtype": "tcp", 00:22:24.816 "traddr": "10.0.0.2", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "4420", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:24.816 "hdgst": false, 00:22:24.816 "ddgst": false 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 },{ 00:22:24.816 "params": { 
00:22:24.816 "name": "Nvme5", 00:22:24.816 "trtype": "tcp", 00:22:24.816 "traddr": "10.0.0.2", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "4420", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:24.816 "hdgst": false, 00:22:24.816 "ddgst": false 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 },{ 00:22:24.816 "params": { 00:22:24.816 "name": "Nvme6", 00:22:24.816 "trtype": "tcp", 00:22:24.816 "traddr": "10.0.0.2", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "4420", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:24.816 "hdgst": false, 00:22:24.816 "ddgst": false 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 },{ 00:22:24.816 "params": { 00:22:24.816 "name": "Nvme7", 00:22:24.816 "trtype": "tcp", 00:22:24.816 "traddr": "10.0.0.2", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "4420", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:24.816 "hdgst": false, 00:22:24.816 "ddgst": false 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 },{ 00:22:24.816 "params": { 00:22:24.816 "name": "Nvme8", 00:22:24.816 "trtype": "tcp", 00:22:24.816 "traddr": "10.0.0.2", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "4420", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:24.816 "hdgst": false, 00:22:24.816 "ddgst": false 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 },{ 00:22:24.816 "params": { 00:22:24.816 "name": "Nvme9", 00:22:24.816 "trtype": "tcp", 00:22:24.816 "traddr": "10.0.0.2", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "4420", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:24.816 "hdgst": false, 00:22:24.816 "ddgst": false 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 },{ 00:22:24.816 "params": { 00:22:24.816 "name": "Nvme10", 00:22:24.816 "trtype": "tcp", 00:22:24.816 "traddr": "10.0.0.2", 00:22:24.816 "adrfam": "ipv4", 00:22:24.816 "trsvcid": "4420", 00:22:24.816 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:24.816 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:24.816 "hdgst": false, 00:22:24.816 "ddgst": false 00:22:24.816 }, 00:22:24.816 "method": "bdev_nvme_attach_controller" 00:22:24.816 }' 00:22:24.816 [2024-11-27 05:44:12.674462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.816 [2024-11-27 05:44:12.715412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.232 Running I/O for 10 seconds... 00:22:26.491 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.492 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:26.750 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.750 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:26.750 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:26.750 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1822358 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1822358 ']' 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1822358 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:27.026 05:44:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1822358 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1822358' 00:22:27.026 killing process with pid 1822358 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1822358 00:22:27.026 05:44:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1822358 00:22:27.026 [2024-11-27 05:44:14.889108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) 
to be set 00:22:27.026 [2024-11-27 05:44:14.889201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 
[2024-11-27 05:44:14.889274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889352] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026 [2024-11-27 05:44:14.889426] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142a850 is same with the state(6) to be set 00:22:27.026
last message repeated for tqpair=0x142a850 through 05:44:14.889545
[2024-11-27 05:44:14.891133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2e30 is same with the state(6) to be set 00:22:27.027
last message repeated for tqpair=0x16a2e30 through 05:44:14.891530
[2024-11-27 05:44:14.893666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b1f0 is same with the state(6) to be set 00:22:27.028
last message repeated for tqpair=0x142b1f0 through 05:44:14.894073
[2024-11-27 05:44:14.894081] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b1f0 is same with the state(6) to be set 00:22:27.029
[2024-11-27 05:44:14.894451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.029
[2024-11-27 05:44:14.894479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.029
last ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for cid:1 through cid:3 through 05:44:14.894528
[2024-11-27 05:44:14.894535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d471c0 is same with the state(6) to be set 00:22:27.029
last ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for cid:0 through cid:3 from 05:44:14.894563 through 05:44:14.894613
[2024-11-27 05:44:14.894620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b200 is same with the state(6) to be set 00:22:27.029
last ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for cid:0 through cid:3 from 05:44:14.894644 through 05:44:14.894704
[2024-11-27 05:44:14.894710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46d30 is same with the state(6) to be set 00:22:27.029
last ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for cid:0 through cid:2 from 05:44:14.894753 through 05:44:14.894788
[2024-11-27 05:44:14.894796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:27.029
[2024-11-27 05:44:14.894803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.029
[2024-11-27 05:44:14.894809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21af8b0 is same with the state(6) to be set 00:22:27.029
[2024-11-27 05:44:14.894931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.029
[2024-11-27 05:44:14.894944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.029
last WRITE/ABORTED - SQ DELETION pair repeated for cid:5 through cid:24 (lba:25216 through lba:27648, len:128) through 05:44:14.895250
[2024-11-27 05:44:14.895257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.030 [2024-11-27 05:44:14.895440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.030 [2024-11-27 05:44:14.895447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.030 [2024-11-27 05:44:14.895455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895484]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set
00:22:27.031 [2024-11-27 05:44:14.895535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.031 [2024-11-27 05:44:14.895654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.031 [2024-11-27 05:44:14.895658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.031 [2024-11-27 05:44:14.895662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142b6e0 is same with the state(6) to be set 00:22:27.032 [2024-11-27 05:44:14.895826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27
05:44:14.895837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.895976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.895983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.896393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.896413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.896424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 
nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.896434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.896443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.896449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.896457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.032 [2024-11-27 05:44:14.896464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.032 [2024-11-27 05:44:14.896472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:27.033 [2024-11-27 05:44:14.896516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033
[2024-11-27 05:44:14.896648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the
state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.033 [2024-11-27 05:44:14.896783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.033 [2024-11-27 05:44:14.896789] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.033 [2024-11-27 05:44:14.896794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 
[2024-11-27 05:44:14.896843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.896988] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.896992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.896995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.897000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.897002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.897008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.897010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.897017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.897017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.897025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.897026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.897033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 
[2024-11-27 05:44:14.897038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.034 [2024-11-27 05:44:14.897040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.034 [2024-11-27 05:44:14.897045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.034 [2024-11-27 05:44:14.897050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.897055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.897062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.897071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.897079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:27.035 [2024-11-27 05:44:14.897080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142bbb0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.897090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 
[2024-11-27 05:44:14.897175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.897345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.035 [2024-11-27 05:44:14.897353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.035 [2024-11-27 05:44:14.898095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.035 [2024-11-27 05:44:14.898217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with 
the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 
00:22:27.036 [2024-11-27 05:44:14.898323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 
05:44:14.898396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898469] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.898492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c0a0 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899625] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899702] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.036 [2024-11-27 05:44:14.899719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899777] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899849] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.899974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.900028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.900081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.900133] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.900186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.900237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.900294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.900348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.900400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.900451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c570 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901334] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.037 [2024-11-27 05:44:14.901387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901408] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.901875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.911059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.038 [2024-11-27 05:44:14.911084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.038 [2024-11-27 05:44:14.911104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.038 [2024-11-27 05:44:14.911122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.038 [2024-11-27 05:44:14.911141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.038 [2024-11-27 05:44:14.911161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.038 [2024-11-27 05:44:14.911180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d471c0 (9): Bad file descriptor 00:22:27.038 [2024-11-27 05:44:14.911329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3b200 (9): Bad file descriptor 00:22:27.038 [2024-11-27 05:44:14.911347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d46d30 (9): Bad file descriptor 00:22:27.038 [2024-11-27 05:44:14.911383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b120 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.911486] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5b610 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.911586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2167c60 is same with the state(6) to be set 00:22:27.038 [2024-11-27 05:44:14.911697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21af8b0 (9): Bad file descriptor 00:22:27.038 [2024-11-27 05:44:14.911725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911761] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.038 [2024-11-27 05:44:14.911769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.038 [2024-11-27 05:44:14.911778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.039 [2024-11-27 05:44:14.911786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.039 [2024-11-27 05:44:14.911794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46300 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.911822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.039 [2024-11-27 05:44:14.911833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.039 [2024-11-27 05:44:14.911842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.039 [2024-11-27 05:44:14.911850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.039 [2024-11-27 05:44:14.911859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.039 [2024-11-27 05:44:14.911868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.039 [2024-11-27 05:44:14.911876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:27.039 [2024-11-27 05:44:14.911885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.039 [2024-11-27 05:44:14.911893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21724c0 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.914816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:27.039 [2024-11-27 05:44:14.915362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:27.039 [2024-11-27 05:44:14.915514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.039 [2024-11-27 05:44:14.915537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3b200 with addr=10.0.0.2, port=4420 00:22:27.039 [2024-11-27 05:44:14.915551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b200 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916300] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916397] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916496] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.916542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142ca60 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.917286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.917305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.917311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.917317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.917323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.917329] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.917335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.917342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.917348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.039 [2024-11-27 05:44:14.917353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917401] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917476] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917549] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917623] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2960 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.040 [2024-11-27 05:44:14.917719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21af8b0 with addr=10.0.0.2, port=4420 00:22:27.040 [2024-11-27 05:44:14.917732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x21af8b0 is same with the state(6) to be set 00:22:27.040 [2024-11-27 05:44:14.917750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3b200 (9): Bad file descriptor 00:22:27.040 [2024-11-27 05:44:14.917812] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:27.040 [2024-11-27 05:44:14.918056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.040 [2024-11-27 05:44:14.918077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.040 [2024-11-27 05:44:14.918100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.040 [2024-11-27 05:44:14.918113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.040 [2024-11-27 05:44:14.918132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.040 [2024-11-27 05:44:14.918144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.040 [2024-11-27 05:44:14.918159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.040 [2024-11-27 05:44:14.918170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.040 [2024-11-27 05:44:14.918185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.041 [2024-11-27 05:44:14.918589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.918602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4d270 is same with the state(6) to be set 00:22:27.041 [2024-11-27 05:44:14.919057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21af8b0 (9): Bad file descriptor 00:22:27.041 [2024-11-27 05:44:14.919087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:27.041 [2024-11-27 05:44:14.919098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:27.041 [2024-11-27 05:44:14.919111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:27.041 [2024-11-27 05:44:14.919125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:27.041 [2024-11-27 05:44:14.920500] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:27.041 [2024-11-27 05:44:14.920566] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:27.041 [2024-11-27 05:44:14.920627] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:27.041 [2024-11-27 05:44:14.920691] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:27.041 [2024-11-27 05:44:14.920838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:27.041 [2024-11-27 05:44:14.920878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:27.041 [2024-11-27 05:44:14.920890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:27.041 [2024-11-27 05:44:14.920902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:27.041 [2024-11-27 05:44:14.920914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:27.041 [2024-11-27 05:44:14.921235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.041 [2024-11-27 05:44:14.921259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d46d30 with addr=10.0.0.2, port=4420 00:22:27.041 [2024-11-27 05:44:14.921272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46d30 is same with the state(6) to be set 00:22:27.041 [2024-11-27 05:44:14.921740] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:27.041 [2024-11-27 05:44:14.921822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d46d30 (9): Bad file descriptor 00:22:27.041 [2024-11-27 05:44:14.921859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216b120 (9): Bad file descriptor 00:22:27.041 [2024-11-27 05:44:14.921889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5b610 (9): Bad file descriptor 00:22:27.041 [2024-11-27 05:44:14.921915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2167c60 (9): Bad file descriptor 00:22:27.041 [2024-11-27 05:44:14.921941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d46300 (9): Bad file descriptor 00:22:27.041 [2024-11-27 05:44:14.921966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21724c0 (9): Bad file descriptor 00:22:27.041 [2024-11-27 05:44:14.922010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.041 [2024-11-27 05:44:14.922026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.041 [2024-11-27 05:44:14.922039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.041 [2024-11-27 05:44:14.922051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.042 [2024-11-27 05:44:14.922075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.042 [2024-11-27 05:44:14.922098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bf920 is same with the state(6) to be set 00:22:27.042 [2024-11-27 05:44:14.922255] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:27.042 [2024-11-27 05:44:14.922286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:27.042 [2024-11-27 05:44:14.922299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:27.042 [2024-11-27 05:44:14.922312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:27.042 [2024-11-27 05:44:14.922323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:27.042 [2024-11-27 05:44:14.922387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:27.042 [2024-11-27 05:44:14.922860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.922974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.922988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.923000] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.923014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.923026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.923040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.923051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.923065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.042 [2024-11-27 05:44:14.923077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.042 [2024-11-27 05:44:14.923091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 
05:44:14.923439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923585] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.043 [2024-11-27 05:44:14.923756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.043 [2024-11-27 05:44:14.923771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.923782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.923799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.923811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.923825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.923838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.923852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.923864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.923878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 
[2024-11-27 05:44:14.923889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.923904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.923915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.923929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.923941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.923955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.923967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.923981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.923992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.924006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.924018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.924032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.924044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.924057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.924069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.924081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f4b1a0 is same with the state(6) to be set 00:22:27.044 [2024-11-27 05:44:14.925286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:27.044 [2024-11-27 05:44:14.925517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.044 [2024-11-27 05:44:14.925532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d471c0 with addr=10.0.0.2, port=4420 00:22:27.044 [2024-11-27 05:44:14.925545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d471c0 is same with the state(6) to be set 00:22:27.044 [2024-11-27 05:44:14.925835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:27.044 [2024-11-27 05:44:14.925856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d471c0 (9): Bad file descriptor 00:22:27.044 [2024-11-27 05:44:14.926010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.044 [2024-11-27 05:44:14.926025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3b200 with addr=10.0.0.2, 
port=4420 00:22:27.044 [2024-11-27 05:44:14.926033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b200 is same with the state(6) to be set 00:22:27.044 [2024-11-27 05:44:14.926042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:27.044 [2024-11-27 05:44:14.926049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:27.044 [2024-11-27 05:44:14.926057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:27.044 [2024-11-27 05:44:14.926065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:27.044 [2024-11-27 05:44:14.926106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:27.044 [2024-11-27 05:44:14.926123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3b200 (9): Bad file descriptor 00:22:27.044 [2024-11-27 05:44:14.926244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.044 [2024-11-27 05:44:14.926257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21af8b0 with addr=10.0.0.2, port=4420 00:22:27.044 [2024-11-27 05:44:14.926265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21af8b0 is same with the state(6) to be set 00:22:27.044 [2024-11-27 05:44:14.926273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:27.044 [2024-11-27 05:44:14.926280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:27.044 [2024-11-27 05:44:14.926288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] 
in failed state. 00:22:27.044 [2024-11-27 05:44:14.926295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:27.044 [2024-11-27 05:44:14.926330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21af8b0 (9): Bad file descriptor 00:22:27.044 [2024-11-27 05:44:14.926364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:27.044 [2024-11-27 05:44:14.926371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:27.044 [2024-11-27 05:44:14.926378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:27.044 [2024-11-27 05:44:14.926384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:27.044 [2024-11-27 05:44:14.930963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:27.044 [2024-11-27 05:44:14.931146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.044 [2024-11-27 05:44:14.931160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d46d30 with addr=10.0.0.2, port=4420 00:22:27.044 [2024-11-27 05:44:14.931169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46d30 is same with the state(6) to be set 00:22:27.044 [2024-11-27 05:44:14.931202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d46d30 (9): Bad file descriptor 00:22:27.044 [2024-11-27 05:44:14.931240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:27.044 [2024-11-27 05:44:14.931248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:27.044 [2024-11-27 05:44:14.931256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:27.044 [2024-11-27 05:44:14.931263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:27.044 [2024-11-27 05:44:14.931875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bf920 (9): Bad file descriptor 00:22:27.044 [2024-11-27 05:44:14.931972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.044 [2024-11-27 05:44:14.931985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.044 [2024-11-27 05:44:14.931998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932351] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932447] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.045 [2024-11-27 05:44:14.932517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.045 [2024-11-27 05:44:14.932527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.046 [2024-11-27 05:44:14.932535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.046 [2024-11-27 05:44:14.932544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.046 [2024-11-27 05:44:14.932552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.046 [2024-11-27 05:44:14.932562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.046 [2024-11-27 05:44:14.932570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.046 [2024-11-27 05:44:14.932579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.046 [2024-11-27 05:44:14.932588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.046 [2024-11-27 05:44:14.932597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.046 [2024-11-27 05:44:14.932607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.046 [2024-11-27 05:44:14.932617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.046 [2024-11-27 05:44:14.932625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.046 [2024-11-27 05:44:14.932635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.046 [2024-11-27 05:44:14.932643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.046 [2024-11-27 
00:22:27.046 [2024-11-27 05:44:14.932653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.046 [2024-11-27 05:44:14.932660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for cid:39 through cid:63 (lba:21376 through lba:24448, len:128, each completed ABORTED - SQ DELETION) ...]
00:22:27.047 [2024-11-27 05:44:14.933116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214b490 is same with the state(6) to be set
00:22:27.047 [2024-11-27 05:44:14.934275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.047 [2024-11-27 05:44:14.934290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for cid:1 through cid:63 (lba:16512 through lba:24448, len:128, each completed ABORTED - SQ DELETION) ...]
00:22:27.049 [2024-11-27 05:44:14.935228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c690 is same with the state(6) to be set
00:22:27.049 [2024-11-27 05:44:14.936220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.049 [2024-11-27 05:44:14.936232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for cid:1 through cid:27 (lba:16512 through lba:19840, len:128); section truncated mid-entry at [2024-11-27 05:44:14.936634] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 
05:44:14.936808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.936987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.936995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.937001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.937010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.937016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.937023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.050 [2024-11-27 05:44:14.937030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.050 [2024-11-27 05:44:14.937038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.937044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:27.051 [2024-11-27 05:44:14.937053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.937065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.937074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.937080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.937088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.937094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.937102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.937109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.937117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.937123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.937131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.937137] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.937146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.937152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.937160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.937167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.937174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214d9a0 is same with the state(6) to be set 00:22:27.051 [2024-11-27 05:44:14.938160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:27.051 [2024-11-27 05:44:14.938295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.051 [2024-11-27 05:44:14.938457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.051 [2024-11-27 05:44:14.938465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938625] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.052 [2024-11-27 05:44:14.938704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.052 [2024-11-27 05:44:14.938711] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:27.052 [2024-11-27 05:44:14.938719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.052 [2024-11-27 05:44:14.938725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" notice pairs repeated for cid:38 through cid:63 (lba 21248 through 24448, len:128) elided ...]
00:22:27.053 [2024-11-27 05:44:14.939106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214ecb0 is same with the state(6) to be set
00:22:27.053 [2024-11-27 05:44:14.940069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.053 [2024-11-27 05:44:14.940081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" notice pairs repeated for cid:1 through cid:63 (lba 16512 through 24448, len:128) elided ...]
00:22:27.055 [2024-11-27 05:44:14.941025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x30958f0 is same with the state(6) to be set
00:22:27.055 [2024-11-27 05:44:14.941973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:27.055 [2024-11-27 05:44:14.941990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:27.055 [2024-11-27 05:44:14.942000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:27.055 [2024-11-27 05:44:14.942013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:27.055 [2024-11-27 05:44:14.942080] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:22:27.055 [2024-11-27 05:44:14.942147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:27.055 [2024-11-27 05:44:14.942293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:27.055 [2024-11-27 05:44:14.942305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d46300 with addr=10.0.0.2, port=4420
00:22:27.055 [2024-11-27 05:44:14.942313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46300 is same with the state(6) to be set
00:22:27.055 [2024-11-27 05:44:14.942509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:27.055 [2024-11-27 05:44:14.942519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21724c0 with addr=10.0.0.2, port=4420
00:22:27.055 [2024-11-27 05:44:14.942526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21724c0 is same with the state(6) to be set
00:22:27.055 [2024-11-27 05:44:14.942603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:27.055 [2024-11-27 05:44:14.942613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2167c60 with addr=10.0.0.2, port=4420
00:22:27.055 [2024-11-27 05:44:14.942620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2167c60 is same with the state(6) to be set
00:22:27.055 [2024-11-27 05:44:14.942761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:27.055 [2024-11-27 05:44:14.942771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b610 with addr=10.0.0.2, port=4420
00:22:27.055 [2024-11-27 05:44:14.942778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5b610 is same with the state(6) to be set
00:22:27.055 [2024-11-27 05:44:14.943921]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.055 [2024-11-27 05:44:14.943934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION" notice pairs repeated for cid:1 through cid:16 (lba 16512 through 18432, len:128) elided ...]
00:22:27.056 [2024-11-27 05:44:14.944180]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944261] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 
05:44:14.944426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.056 [2024-11-27 05:44:14.944504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.056 [2024-11-27 05:44:14.944513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:27.057 [2024-11-27 05:44:14.944678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.057 [2024-11-27 05:44:14.944835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.057 [2024-11-27 05:44:14.944843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.057 [2024-11-27 05:44:14.944850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:27.057 [2024-11-27 05:44:14.944858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:27.057 [2024-11-27 05:44:14.944864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:27.057 [2024-11-27 05:44:14.944871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8ad10 is same with the state(6) to be set
00:22:27.057 [2024-11-27 05:44:14.945820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:27.057 [2024-11-27 05:44:14.945836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:27.057 [2024-11-27 05:44:14.945845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:27.057 [2024-11-27 05:44:14.945854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:27.057 task offset: 25088 on job bdev=Nvme2n1 fails
00:22:27.057
00:22:27.057 Latency(us)
00:22:27.057 [2024-11-27T04:44:15.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:27.057 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:27.057 Job: Nvme1n1 ended in about 0.72 seconds with error
00:22:27.057 Verification LBA range: start 0x0 length 0x400
00:22:27.057 Nvme1n1 : 0.72 177.48 11.09 88.74 0.00 237483.15 18849.40 193736.90
00:22:27.057 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:27.057 Job: Nvme2n1 ended in about 0.71 seconds with error
00:22:27.057 Verification LBA range: start 0x0 length 0x400
00:22:27.057 Nvme2n1 : 0.71 270.78 16.92 90.26 0.00 171157.21 16103.13 210713.84
00:22:27.057 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:27.057 Job: Nvme3n1 ended in about 0.72 seconds with error
00:22:27.057 Verification LBA range: start 0x0 length 0x400
00:22:27.057 Nvme3n1 : 0.72 265.33 16.58 27.93 0.00 205540.74 2434.19 225693.50
00:22:27.057 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:27.057 Job: Nvme4n1 ended in about 0.73 seconds with error
00:22:27.058 Verification LBA range: start 0x0 length 0x400
00:22:27.058 Nvme4n1 : 0.73 175.30 10.96 87.65 0.00 225033.75 13731.35 227690.79
00:22:27.058 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:27.058 Job: Nvme5n1 ended in about 0.73 seconds with error
00:22:27.058 Verification LBA range: start 0x0 length 0x400
00:22:27.058 Nvme5n1 : 0.73 174.81 10.93 87.40 0.00 220632.67 17226.61 214708.42
00:22:27.058 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:27.058 Job: Nvme6n1 ended in about 0.73 seconds with error
00:22:27.058 Verification LBA range: start 0x0 length 0x400
00:22:27.058 Nvme6n1 : 0.73 174.34 10.90 87.17 0.00 216124.14 19972.88 211712.49
00:22:27.058 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:27.058 Job: Nvme7n1 ended in about 0.74 seconds with error
00:22:27.058 Verification LBA range: start 0x0 length 0x400
00:22:27.058 Nvme7n1 : 0.74 173.89 10.87 86.94 0.00 211704.20 15853.47 211712.49
00:22:27.058 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:27.058 Job: Nvme8n1 ended in about 0.74 seconds with error
00:22:27.058 Verification LBA range: start 0x0 length 0x400
00:22:27.058 Nvme8n1 : 0.74 173.44 10.84 86.72 0.00 207148.37 17725.93 212711.13
00:22:27.058 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:27.058 Job: Nvme9n1 ended in about 0.74 seconds with error
00:22:27.058 Verification LBA range: start 0x0 length 0x400
00:22:27.058 Nvme9n1 : 0.74 172.54 10.78 86.27 0.00 203263.35 30458.64 218702.99
00:22:27.058 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:27.058 Job: Nvme10n1 ended in about 0.71 seconds with error
00:22:27.058 Verification LBA range: start 0x0 length 0x400
00:22:27.058 Nvme10n1 : 0.71 180.13 11.26 90.07 0.00 187840.12 15728.64 233682.65
00:22:27.058 [2024-11-27T04:44:15.062Z] ===================================================================================================================
00:22:27.058 [2024-11-27T04:44:15.062Z] Total : 1938.03 121.13 819.15 0.00 207368.59 2434.19 233682.65
00:22:27.058 [2024-11-27 05:44:14.975424] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:27.058 [2024-11-27 05:44:14.975475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:27.058 [2024-11-27 05:44:14.975731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:27.058 [2024-11-27 05:44:14.975749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x216b120 with addr=10.0.0.2, port=4420
00:22:27.058 [2024-11-27 05:44:14.975759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x216b120 is same with the state(6) to be set
00:22:27.058 [2024-11-27 05:44:14.975774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d46300 (9): Bad file descriptor
00:22:27.058 [2024-11-27 05:44:14.975785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21724c0 (9): Bad file descriptor
00:22:27.058 [2024-11-27 05:44:14.975793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2167c60 (9): Bad file descriptor
00:22:27.058 [2024-11-27 05:44:14.975802]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5b610 (9): Bad file descriptor 00:22:27.058 [2024-11-27 05:44:14.976015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.058 [2024-11-27 05:44:14.976029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d471c0 with addr=10.0.0.2, port=4420 00:22:27.058 [2024-11-27 05:44:14.976037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d471c0 is same with the state(6) to be set 00:22:27.058 [2024-11-27 05:44:14.976186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.058 [2024-11-27 05:44:14.976196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3b200 with addr=10.0.0.2, port=4420 00:22:27.058 [2024-11-27 05:44:14.976204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b200 is same with the state(6) to be set 00:22:27.058 [2024-11-27 05:44:14.976296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.058 [2024-11-27 05:44:14.976305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21af8b0 with addr=10.0.0.2, port=4420 00:22:27.058 [2024-11-27 05:44:14.976312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21af8b0 is same with the state(6) to be set 00:22:27.058 [2024-11-27 05:44:14.976393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.058 [2024-11-27 05:44:14.976403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d46d30 with addr=10.0.0.2, port=4420 00:22:27.058 [2024-11-27 05:44:14.976410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46d30 is same with the state(6) to be set 00:22:27.058 [2024-11-27 05:44:14.976574] posix.c:1054:posix_sock_create: 
*ERROR*: connect() failed, errno = 111 00:22:27.058 [2024-11-27 05:44:14.976584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf920 with addr=10.0.0.2, port=4420 00:22:27.058 [2024-11-27 05:44:14.976591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bf920 is same with the state(6) to be set 00:22:27.058 [2024-11-27 05:44:14.976599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x216b120 (9): Bad file descriptor 00:22:27.058 [2024-11-27 05:44:14.976608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:27.058 [2024-11-27 05:44:14.976614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:27.058 [2024-11-27 05:44:14.976622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:27.058 [2024-11-27 05:44:14.976630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:27.058 [2024-11-27 05:44:14.976639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:27.058 [2024-11-27 05:44:14.976644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:27.058 [2024-11-27 05:44:14.976650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:27.058 [2024-11-27 05:44:14.976656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:22:27.058 [2024-11-27 05:44:14.976662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:27.058 [2024-11-27 05:44:14.976668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:27.058 [2024-11-27 05:44:14.976682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:27.058 [2024-11-27 05:44:14.976688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:27.058 [2024-11-27 05:44:14.976694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:27.058 [2024-11-27 05:44:14.976700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:27.058 [2024-11-27 05:44:14.976706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:27.058 [2024-11-27 05:44:14.976712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:27.058 [2024-11-27 05:44:14.976766] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:22:27.058 [2024-11-27 05:44:14.977087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d471c0 (9): Bad file descriptor 00:22:27.058 [2024-11-27 05:44:14.977100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3b200 (9): Bad file descriptor 00:22:27.058 [2024-11-27 05:44:14.977109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21af8b0 (9): Bad file descriptor 00:22:27.058 [2024-11-27 05:44:14.977117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d46d30 (9): Bad file descriptor 00:22:27.058 [2024-11-27 05:44:14.977124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bf920 (9): Bad file descriptor 00:22:27.058 [2024-11-27 05:44:14.977132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:27.058 [2024-11-27 05:44:14.977138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:27.058 [2024-11-27 05:44:14.977144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:27.059 [2024-11-27 05:44:14.977150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:22:27.059 [2024-11-27 05:44:14.977182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:27.059 [2024-11-27 05:44:14.977192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:27.059 [2024-11-27 05:44:14.977200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:27.059 [2024-11-27 05:44:14.977207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:27.059 [2024-11-27 05:44:14.977233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:27.059 [2024-11-27 05:44:14.977240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:27.059 [2024-11-27 05:44:14.977246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:27.059 [2024-11-27 05:44:14.977252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:27.059 [2024-11-27 05:44:14.977258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:27.059 [2024-11-27 05:44:14.977264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:27.059 [2024-11-27 05:44:14.977270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:27.059 [2024-11-27 05:44:14.977275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:27.059 [2024-11-27 05:44:14.977281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:27.059 [2024-11-27 05:44:14.977287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:27.059 [2024-11-27 05:44:14.977293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:27.059 [2024-11-27 05:44:14.977298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:27.059 [2024-11-27 05:44:14.977305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:27.059 [2024-11-27 05:44:14.977310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:27.059 [2024-11-27 05:44:14.977316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:27.059 [2024-11-27 05:44:14.977324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:27.059 [2024-11-27 05:44:14.977331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:27.059 [2024-11-27 05:44:14.977337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:27.059 [2024-11-27 05:44:14.977343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:27.059 [2024-11-27 05:44:14.977348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:22:27.059 [2024-11-27 05:44:14.977478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.059 [2024-11-27 05:44:14.977490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b610 with addr=10.0.0.2, port=4420 00:22:27.059 [2024-11-27 05:44:14.977497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5b610 is same with the state(6) to be set 00:22:27.059 [2024-11-27 05:44:14.977566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.059 [2024-11-27 05:44:14.977575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2167c60 with addr=10.0.0.2, port=4420 00:22:27.059 [2024-11-27 05:44:14.977582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2167c60 is same with the state(6) to be set 00:22:27.059 [2024-11-27 05:44:14.977729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.059 [2024-11-27 05:44:14.977739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21724c0 with addr=10.0.0.2, port=4420 00:22:27.059 [2024-11-27 05:44:14.977745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21724c0 is same with the state(6) to be set 00:22:27.059 [2024-11-27 05:44:14.977832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.059 [2024-11-27 05:44:14.977841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d46300 with addr=10.0.0.2, port=4420 00:22:27.059 [2024-11-27 05:44:14.977848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46300 is same with the state(6) to be set 00:22:27.059 [2024-11-27 05:44:14.977875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5b610 (9): Bad file descriptor 00:22:27.059 [2024-11-27 
05:44:14.977884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2167c60 (9): Bad file descriptor 00:22:27.059 [2024-11-27 05:44:14.977892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21724c0 (9): Bad file descriptor 00:22:27.059 [2024-11-27 05:44:14.977900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d46300 (9): Bad file descriptor 00:22:27.059 [2024-11-27 05:44:14.977923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:27.059 [2024-11-27 05:44:14.977930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:27.059 [2024-11-27 05:44:14.977937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:27.059 [2024-11-27 05:44:14.977944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:27.059 [2024-11-27 05:44:14.977951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:27.059 [2024-11-27 05:44:14.977956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:27.059 [2024-11-27 05:44:14.977963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:27.059 [2024-11-27 05:44:14.977968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:22:27.059 [2024-11-27 05:44:14.977977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:27.059 [2024-11-27 05:44:14.977983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:27.059 [2024-11-27 05:44:14.977989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:27.059 [2024-11-27 05:44:14.977995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:27.059 [2024-11-27 05:44:14.978001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:27.059 [2024-11-27 05:44:14.978007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:27.059 [2024-11-27 05:44:14.978013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:27.059 [2024-11-27 05:44:14.978018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:27.319 05:44:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1822635 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1822635 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1822635 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:28.698 rmmod nvme_tcp 00:22:28.698 rmmod nvme_fabrics 00:22:28.698 rmmod nvme_keyring 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:28.698 05:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1822358 ']' 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1822358 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1822358 ']' 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1822358 00:22:28.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1822358) - No such process 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1822358 is not found' 00:22:28.698 Process with pid 1822358 is not found 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.698 05:44:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.604 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:30.604 00:22:30.604 real 0m7.681s 00:22:30.604 user 0m18.664s 00:22:30.604 sys 0m1.314s 00:22:30.604 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:30.604 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.604 ************************************ 00:22:30.604 END TEST nvmf_shutdown_tc3 00:22:30.604 ************************************ 00:22:30.604 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:30.604 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:30.604 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:30.604 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:30.604 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.604 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:30.604 ************************************ 00:22:30.604 START TEST nvmf_shutdown_tc4 00:22:30.604 ************************************ 00:22:30.604 05:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:30.604 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:30.605 05:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.605 05:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:30.605 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:30.605 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.605 05:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.605 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:22:30.606 Found net devices under 0000:86:00.0: cvl_0_0 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:30.606 Found net devices under 0000:86:00.1: cvl_0_1 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]]
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:30.606 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:30.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:30.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms
00:22:30.865
00:22:30.865 --- 10.0.0.2 ping statistics ---
00:22:30.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:30.865 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:30.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:30.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:22:30.865
00:22:30.865 --- 10.0.0.1 ping statistics ---
00:22:30.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:30.865 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:30.865 05:44:18
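The `nvmf_tcp_init` trace above can be condensed into a small standalone script. The interface names (`cvl_0_0`/`cvl_0_1`), the namespace name, and the `10.0.0.0/24` addresses are taken from the log; the `DRY_RUN` switch and the `run` helper are additions for illustration, so the sketch prints the commands by default instead of requiring root.

```shell
#!/usr/bin/env bash
# Sketch of the network-namespace setup that nvmf_tcp_init performs in the
# log above. Commands are only printed by default; set DRY_RUN=0 and run as
# root to actually execute them.
DRY_RUN=${DRY_RUN:-1}
CMDS=""
run() {
  CMDS="$CMDS$*"$'\n'                              # record for inspection
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side interface, moved into the namespace
INI_IF=cvl_0_1   # initiator-side interface, stays in the root namespace

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Verify reachability in both directions, as the log does with ping:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Moving `cvl_0_0` into its own namespace is what forces initiator-to-target traffic over a real TCP path between the two interfaces rather than the loopback shortcut.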
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1823677
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1823677
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1823677 ']'
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:30.865 05:44:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:31.125 [2024-11-27 05:44:18.902457] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:22:31.125 [2024-11-27 05:44:18.902501] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:31.125 [2024-11-27 05:44:18.981407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:31.125 [2024-11-27 05:44:19.023036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:31.125 [2024-11-27 05:44:19.023073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:31.125 [2024-11-27 05:44:19.023080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:31.125 [2024-11-27 05:44:19.023086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:31.125 [2024-11-27 05:44:19.023091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:31.125 [2024-11-27 05:44:19.024548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:31.125 [2024-11-27 05:44:19.024692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:22:31.125 [2024-11-27 05:44:19.024810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:31.125 [2024-11-27 05:44:19.024811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:32.064 [2024-11-27 05:44:19.781026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:32.064 05:44:19
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:32.064 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.065 05:44:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:32.065 Malloc1 00:22:32.065 [2024-11-27 05:44:19.893846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:32.065 Malloc2 00:22:32.065 Malloc3 00:22:32.065 Malloc4 00:22:32.065 Malloc5 00:22:32.323 Malloc6 00:22:32.323 Malloc7 00:22:32.323 Malloc8 00:22:32.323 Malloc9 
00:22:32.323 Malloc10
00:22:32.323 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:32.323 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:22:32.323 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:32.323 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:32.323 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1823963
00:22:32.323 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:22:32.323 05:44:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:22:32.582 [2024-11-27 05:44:20.400724] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
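The overall shape of `nvmf_shutdown_tc4` is visible in the trace: open the firewall for the NVMe/TCP port, start the target inside the namespace, run `spdk_nvme_perf` against it, and then kill the target while I/O is still in flight. The paths and flags below are copied from the log; the command strings are only assembled and printed here (a sketch, not executed), and `$nvmfpid` stands for the target pid the harness records.

```shell
#!/usr/bin/env bash
# Sketch of the shutdown-under-load sequence from the log. Each command is
# taken from the trace above; this script only prints them.
NS=cvl_0_0_ns_spdk
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cmds=(
  # 1. Allow inbound NVMe/TCP traffic on the initiator-side interface:
  "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
  # 2. Start the target in the namespace (cores 1-4, all tracepoints on):
  "ip netns exec $NS $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &"
  # 3. Drive queued writes at the target for 20 seconds:
  "$SPDK_DIR/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &"
  # 4. After 5 seconds, kill the target while perf still has I/O in flight:
  "sleep 5 && kill \$nvmfpid"
)
for c in "${cmds[@]}"; do printf '+ %s\n' "$c"; done
```

Killing the target mid-run is deliberate: the flood of `Write completed with error (sct=0, sc=8)` and `CQ transport error -6` messages that follows is the expected initiator-side reaction the test is checking, not a test failure in itself.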
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1823677
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1823677 ']'
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1823677
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1823677
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1823677'
killing process with pid 1823677
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1823677
00:22:37.866 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1823677
00:22:37.866 [2024-11-27 05:44:25.391808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1801b80 is same with the state(6) to be set
00:22:37.866 [2024-11-27
05:44:25.391858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1801b80 is same with the state(6) to be set 00:22:37.867 [2024-11-27 05:44:25.391866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1801b80 is same with the state(6) to be set 00:22:37.867 [2024-11-27 05:44:25.391872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1801b80 is same with the state(6) to be set 00:22:37.867 [2024-11-27 05:44:25.391879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1801b80 is same with the state(6) to be set 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, 
sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 
Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 
starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 [2024-11-27 05:44:25.393554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.867 NVMe io qpair process completion error 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 
00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 [2024-11-27 05:44:25.401505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 
00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.867 starting I/O failed: -6 00:22:37.867 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 [2024-11-27 05:44:25.402150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803b30 is same with the state(6) to be set 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 [2024-11-27 05:44:25.402178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803b30 is same with starting I/O failed: -6 00:22:37.868 the state(6) to be set 00:22:37.868 [2024-11-27 
05:44:25.402188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803b30 is same with the state(6) to be set 00:22:37.868 [2024-11-27 05:44:25.402194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803b30 is same with the state(6) to be set 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 [2024-11-27 05:44:25.402201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803b30 is same with the state(6) to be set 00:22:37.868 [2024-11-27 05:44:25.402207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803b30 is same with Write completed with error (sct=0, sc=8) 00:22:37.868 the state(6) to be set 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 [2024-11-27 05:44:25.402406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 [2024-11-27 05:44:25.402504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804020 is same with the state(6) to be set 00:22:37.868 Write completed with error 
(sct=0, sc=8) 00:22:37.868 [2024-11-27 05:44:25.402527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804020 is same with the state(6) to be set 00:22:37.868 starting I/O failed: -6 00:22:37.868 [2024-11-27 05:44:25.402534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804020 is same with the state(6) to be set 00:22:37.868 [2024-11-27 05:44:25.402541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804020 is same with the state(6) to be set 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 [2024-11-27 05:44:25.402547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804020 is same with the state(6) to be set 00:22:37.868 [2024-11-27 05:44:25.402554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804020 is same with the state(6) to be set 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 [2024-11-27 05:44:25.402561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804020 is same with the state(6) to be set 00:22:37.868 starting I/O failed: -6 00:22:37.868 [2024-11-27 05:44:25.402567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804020 is same with the state(6) to be set 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 
00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 [2024-11-27 05:44:25.402958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804510 is same with the state(6) to be set 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 [2024-11-27 05:44:25.402980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804510 is same with the state(6) to be set 00:22:37.868 [2024-11-27 05:44:25.402988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804510 is same with Write completed with error (sct=0, sc=8) 00:22:37.868 the state(6) to be set 00:22:37.868 [2024-11-27 05:44:25.403000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804510 is same with the state(6) to be set 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 [2024-11-27 05:44:25.403007] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804510 is same with the state(6) to be set 00:22:37.868 starting I/O failed: -6 00:22:37.868 [2024-11-27 05:44:25.403014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804510 is same with the state(6) to be set 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 [2024-11-27 05:44:25.403317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803660 is same with the state(6) to be set 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 [2024-11-27 05:44:25.403340] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803660 is same with the state(6) to be set 00:22:37.868 starting I/O failed: -6 00:22:37.868 [2024-11-27 05:44:25.403348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803660 is same with the state(6) to be set 00:22:37.868 [2024-11-27 05:44:25.403355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803660 is same with the state(6) to be set 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 [2024-11-27 05:44:25.403361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803660 is same with the state(6) to be set 00:22:37.868 [2024-11-27 05:44:25.403368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803660 is same with the state(6) to be set 00:22:37.868 [2024-11-27 05:44:25.403374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1803660 is same with the state(6) to be set 00:22:37.868 [2024-11-27 05:44:25.403379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with 
error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.868 Write completed with error (sct=0, sc=8) 00:22:37.868 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed 
with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write 
completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 [2024-11-27 05:44:25.404837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804d80 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.404850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804d80 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.404857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804d80 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.404863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804d80 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.404870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804d80 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.404876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1804d80 is same with the state(6) to be set 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 [2024-11-27 05:44:25.404923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.869 NVMe io qpair process completion error 00:22:37.869 [2024-11-27 05:44:25.405183] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805250 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805250 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805250 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805250 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805250 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805250 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805250 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805250 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805250 is same with the state(6) to be set 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 
00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 [2024-11-27 05:44:25.405632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 starting I/O failed: -6 00:22:37.869 [2024-11-27 05:44:25.405644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 [2024-11-27 05:44:25.405663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405678] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 [2024-11-27 05:44:25.405684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 [2024-11-27 05:44:25.405696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with Write completed with error (sct=0, sc=8) 00:22:37.869 the state(6) to be set 00:22:37.869 starting I/O failed: -6 00:22:37.869 [2024-11-27 05:44:25.405719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 [2024-11-27 05:44:25.405737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the state(6) to be set 00:22:37.869 [2024-11-27 05:44:25.405743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805720 is same with the 
state(6) to be set 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.869 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 [2024-11-27 05:44:25.405880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 [2024-11-27 05:44:25.406002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18048b0 is same with the state(6) to be set 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 [2024-11-27 05:44:25.406014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18048b0 is same with the state(6) to be set 00:22:37.870 [2024-11-27 05:44:25.406022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18048b0 is same with the state(6) to be set 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 [2024-11-27 05:44:25.406028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18048b0 is same with the state(6) to be set 00:22:37.870 starting I/O failed: -6 00:22:37.870 [2024-11-27 05:44:25.406035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18048b0 is same with the state(6) to be set 00:22:37.870 [2024-11-27 05:44:25.406041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18048b0 is same with the state(6) to be set 00:22:37.870 Write completed with error (sct=0, 
sc=8) 00:22:37.870 [2024-11-27 05:44:25.406047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18048b0 is same with the state(6) to be set 00:22:37.870 [2024-11-27 05:44:25.406053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18048b0 is same with the state(6) to be set 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 
starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 [2024-11-27 05:44:25.406740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: 
-6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with 
error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 [2024-11-27 05:44:25.407719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 
starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.870 Write completed with error (sct=0, sc=8) 00:22:37.870 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 
00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, sc=8) 00:22:37.871 starting I/O failed: -6 00:22:37.871 Write completed with error (sct=0, 
sc=8) 00:22:37.871 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for the remaining in-flight writes ...]
00:22:37.871 [2024-11-27 05:44:25.409503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.871 NVMe io qpair process completion error
[... repeated write-completion errors ...]
00:22:37.871 [2024-11-27 05:44:25.410511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion errors ...]
00:22:37.872 [2024-11-27 05:44:25.411413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-completion errors ...]
00:22:37.872 [2024-11-27 05:44:25.412389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion errors ...]
00:22:37.873 [2024-11-27 05:44:25.414241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.873 NVMe io qpair process completion error
[... repeated write-completion errors ...]
00:22:37.873 [2024-11-27 05:44:25.415268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion errors ...]
00:22:37.873 [2024-11-27 05:44:25.416066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion errors ...]
00:22:37.873 [2024-11-27 05:44:25.417095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-completion errors ...]
00:22:37.874 [2024-11-27 05:44:25.420852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.874 NVMe io qpair process completion error
[... repeated write-completion errors ...]
00:22:37.874 [2024-11-27 05:44:25.421861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion errors ...]
00:22:37.875 [2024-11-27 05:44:25.422746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:22:37.875 starting I/O failed: -6
Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 [2024-11-27 05:44:25.423716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.875 starting I/O failed: -6 00:22:37.875 starting I/O failed: -6 00:22:37.875 starting I/O failed: -6 00:22:37.875 starting I/O failed: -6 00:22:37.875 starting I/O failed: -6 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 
00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: 
-6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O 
failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.875 Write completed with error (sct=0, sc=8) 00:22:37.875 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 [2024-11-27 05:44:25.428278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:37.876 NVMe io qpair process completion error 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 
starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 [2024-11-27 05:44:25.429271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with 
error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 
00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 [2024-11-27 05:44:25.430144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 
00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 Write completed with error (sct=0, sc=8) 00:22:37.876 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with 
error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 [2024-11-27 05:44:25.431175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O 
failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting 
I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 
starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 starting I/O failed: -6 00:22:37.877 [2024-11-27 05:44:25.432702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.877 NVMe io qpair process completion error 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 
00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 
00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.877 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 starting I/O failed: -6 00:22:37.878 Write completed with error (sct=0, sc=8) 00:22:37.878 Write 
00:22:37.878 Write completed with error (sct=0, sc=8)
00:22:37.878 starting I/O failed: -6
[the two entries above repeat continuously throughout this window; repeats elided]
00:22:37.878 [2024-11-27 05:44:25.435975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:37.878 [2024-11-27 05:44:25.436877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.879 [2024-11-27 05:44:25.437877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:37.879 [2024-11-27 05:44:25.443659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.879 NVMe io qpair process completion error
00:22:37.879 [2024-11-27 05:44:25.444662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:37.880 [2024-11-27 05:44:25.445567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.880 [2024-11-27 05:44:25.446592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:37.881 [2024-11-27 05:44:25.450553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:37.881 NVMe io qpair process completion error
00:22:37.881 [2024-11-27 05:44:25.451561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:37.881 [2024-11-27 05:44:25.452362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:37.882 Write completed with error (sct=0,
sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 [2024-11-27 05:44:25.453405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed 
with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write 
completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 
Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 Write completed with error (sct=0, sc=8) 00:22:37.882 starting I/O failed: -6 00:22:37.882 [2024-11-27 05:44:25.455221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:37.882 NVMe io qpair process completion error 00:22:37.882 Initializing NVMe Controllers 00:22:37.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:37.882 Controller IO queue size 128, less than required. 00:22:37.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:37.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:37.882 Controller IO queue size 128, less than required.
00:22:37.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:37.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:37.882 Controller IO queue size 128, less than required.
00:22:37.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:37.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:37.882 Controller IO queue size 128, less than required.
00:22:37.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:37.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:37.882 Controller IO queue size 128, less than required.
00:22:37.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:37.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:37.882 Controller IO queue size 128, less than required.
00:22:37.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:37.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:37.882 Controller IO queue size 128, less than required.
00:22:37.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:37.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:37.882 Controller IO queue size 128, less than required.
00:22:37.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:37.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:37.882 Controller IO queue size 128, less than required.
00:22:37.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:37.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:37.882 Controller IO queue size 128, less than required.
00:22:37.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:37.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:37.883 Initialization complete. Launching workers.
00:22:37.883 ========================================================
00:22:37.883 Latency(us)
00:22:37.883 Device Information : IOPS MiB/s Average min max
00:22:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2205.95 94.79 58029.35 908.20 110944.97
00:22:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2191.50 94.17 58267.60 746.48 110336.90
00:22:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2197.57 94.43 58244.31 950.69 110198.26
00:22:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2208.04 94.88 58029.15 700.32 116931.73
00:22:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2181.65 93.74 58770.36 793.97 121238.21
00:22:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2188.14 94.02 57939.25 718.17 103835.49
00:22:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2185.42 93.90 58021.26 908.18 102224.98
00:22:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2169.50 93.22 58458.47 950.32 102222.77
00:22:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2209.72 94.95 57408.51 739.78 100772.27
00:22:37.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2247.21 96.56 56487.48 746.44 103270.08
00:22:37.883 ========================================================
00:22:37.883 Total : 21984.70 944.66 57960.67 700.32 121238.21
00:22:37.883
00:22:37.883 [2024-11-27 05:44:25.458182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ab410 is same with the state(6) to be set
00:22:37.883 [2024-11-27 05:44:25.458224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ac900 is same with the state(6) to be set
00:22:37.883 [2024-11-27 05:44:25.458253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aba70 is same with the state(6) to be set
00:22:37.883 [2024-11-27 05:44:25.458283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aaef0 is same with the state(6) to be set
00:22:37.883 [2024-11-27 05:44:25.458312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ab740 is same with the state(6) to be set
00:22:37.883 [2024-11-27 05:44:25.458338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ac720 is same with the state(6) to be set
00:22:37.883 [2024-11-27 05:44:25.458365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aa560 is same with the state(6) to be set
00:22:37.883 [2024-11-27 05:44:25.458392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22acae0 is same with the state(6) to be set
00:22:37.883 [2024-11-27 05:44:25.458421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aa890 is same with the state(6) to be set
00:22:37.883 [2024-11-27 05:44:25.458448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aabc0 is same with the state(6) to be set
00:22:37.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:37.883 05:44:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1823963
00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1823963
00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1823963 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.822 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.822 rmmod nvme_tcp 00:22:38.822 rmmod nvme_fabrics 00:22:39.081 rmmod nvme_keyring 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1823677 ']' 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1823677 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1823677 ']' 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1823677 00:22:39.081 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1823677) - No such process 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1823677 is not found' 00:22:39.081 Process with pid 1823677 is not found 
00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.081 05:44:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.987 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:40.987 00:22:40.987 real 0m10.394s 00:22:40.987 user 0m27.404s 00:22:40.987 sys 0m5.319s 00:22:40.987 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.987 05:44:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:40.987 ************************************ 00:22:40.987 END TEST nvmf_shutdown_tc4 00:22:40.987 ************************************ 00:22:40.987 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:40.987 00:22:40.987 real 0m41.154s 00:22:40.987 user 1m41.238s 00:22:40.987 sys 0m14.108s 00:22:40.987 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.987 05:44:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:40.987 ************************************ 00:22:40.987 END TEST nvmf_shutdown 00:22:40.987 ************************************ 00:22:41.247 05:44:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:41.247 05:44:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:41.247 05:44:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.247 05:44:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:41.247 ************************************ 00:22:41.247 START TEST nvmf_nsid 00:22:41.247 ************************************ 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:41.247 * Looking for test storage... 
00:22:41.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.247 
05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.247 --rc genhtml_branch_coverage=1 00:22:41.247 --rc genhtml_function_coverage=1 00:22:41.247 --rc genhtml_legend=1 00:22:41.247 --rc geninfo_all_blocks=1 00:22:41.247 --rc 
geninfo_unexecuted_blocks=1 00:22:41.247 00:22:41.247 ' 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.247 --rc genhtml_branch_coverage=1 00:22:41.247 --rc genhtml_function_coverage=1 00:22:41.247 --rc genhtml_legend=1 00:22:41.247 --rc geninfo_all_blocks=1 00:22:41.247 --rc geninfo_unexecuted_blocks=1 00:22:41.247 00:22:41.247 ' 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.247 --rc genhtml_branch_coverage=1 00:22:41.247 --rc genhtml_function_coverage=1 00:22:41.247 --rc genhtml_legend=1 00:22:41.247 --rc geninfo_all_blocks=1 00:22:41.247 --rc geninfo_unexecuted_blocks=1 00:22:41.247 00:22:41.247 ' 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.247 --rc genhtml_branch_coverage=1 00:22:41.247 --rc genhtml_function_coverage=1 00:22:41.247 --rc genhtml_legend=1 00:22:41.247 --rc geninfo_all_blocks=1 00:22:41.247 --rc geninfo_unexecuted_blocks=1 00:22:41.247 00:22:41.247 ' 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.247 05:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.247 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:41.248 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.507 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:41.507 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.507 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:41.507 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:41.507 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.507 05:44:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:48.082 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:48.082 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:48.082 Found net devices under 0000:86:00.0: cvl_0_0 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.082 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:48.083 Found net devices under 0000:86:00.1: cvl_0_1 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.083 05:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.083 05:44:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.083 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:48.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:22:48.083 00:22:48.083 --- 10.0.0.2 ping statistics --- 00:22:48.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.083 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:22:48.083 00:22:48.083 --- 10.0.0.1 ping statistics --- 00:22:48.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.083 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.083 05:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1828556 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1828556 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1828556 ']' 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:48.083 [2024-11-27 05:44:35.264393] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:22:48.083 [2024-11-27 05:44:35.264435] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.083 [2024-11-27 05:44:35.341853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.083 [2024-11-27 05:44:35.381154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.083 [2024-11-27 05:44:35.381190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.083 [2024-11-27 05:44:35.381197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.083 [2024-11-27 05:44:35.381203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.083 [2024-11-27 05:44:35.381208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:48.083 [2024-11-27 05:44:35.381729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1828673 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.083 
05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=651c6419-533e-44f4-98f3-83ab7c074bd4 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=086f59c9-1c59-475f-bf9c-2781e307408a 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=42cd3bc8-b2be-4266-9db2-70208f4f64ae 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.083 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:48.083 null0 00:22:48.083 null1 00:22:48.083 [2024-11-27 05:44:35.572007] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:22:48.083 [2024-11-27 05:44:35.572049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828673 ] 00:22:48.083 null2 00:22:48.083 [2024-11-27 05:44:35.578168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.084 [2024-11-27 05:44:35.602376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1828673 /var/tmp/tgt2.sock 00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1828673 ']' 00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:48.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:48.084 [2024-11-27 05:44:35.645456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.084 [2024-11-27 05:44:35.686503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:48.084 05:44:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:48.342 [2024-11-27 05:44:36.215435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.342 [2024-11-27 05:44:36.231545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:48.342 nvme0n1 nvme0n2 00:22:48.342 nvme1n1 00:22:48.342 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:48.342 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:48.343 05:44:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:49.720 05:44:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 651c6419-533e-44f4-98f3-83ab7c074bd4 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:50.658 05:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=651c6419533e44f498f383ab7c074bd4 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 651C6419533E44F498F383AB7C074BD4 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 651C6419533E44F498F383AB7C074BD4 == \6\5\1\C\6\4\1\9\5\3\3\E\4\4\F\4\9\8\F\3\8\3\A\B\7\C\0\7\4\B\D\4 ]] 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 086f59c9-1c59-475f-bf9c-2781e307408a 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:50.658 
05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=086f59c91c59475fbf9c2781e307408a 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 086F59C91C59475FBF9C2781E307408A 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 086F59C91C59475FBF9C2781E307408A == \0\8\6\F\5\9\C\9\1\C\5\9\4\7\5\F\B\F\9\C\2\7\8\1\E\3\0\7\4\0\8\A ]] 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 42cd3bc8-b2be-4266-9db2-70208f4f64ae 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
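The `uuid2nguid` / `nvme_get_nguid` pairs in this stretch of the log verify that each namespace's NGUID equals its UUID with the dashes stripped (the target derived the NGUIDs from the UUIDs given at namespace creation). A minimal stand-alone sketch of that conversion, assuming only POSIX `tr` (the helper name mirrors the one in the xtrace; its real body in nvmf/common.sh may differ):

```shell
# Strip dashes and uppercase a UUID to get the NGUID form the test compares
# against (the `tr -d -` step is visible in the xtrace above).
uuid2nguid() {
    printf '%s' "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid 651c6419-533e-44f4-98f3-83ab7c074bd4
# -> 651C6419533E44F498F383AB7C074BD4
```

Run against the three UUIDs in the log, this reproduces the three uppercase NGUIDs that `nvme id-ns ... -o json | jq -r .nguid` returned for nvme0n1, nvme0n2 and nvme0n3.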
00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=42cd3bc8b2be42669db270208f4f64ae 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 42CD3BC8B2BE42669DB270208F4F64AE 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 42CD3BC8B2BE42669DB270208F4F64AE == \4\2\C\D\3\B\C\8\B\2\B\E\4\2\6\6\9\D\B\2\7\0\2\0\8\F\4\F\6\4\A\E ]] 00:22:50.658 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1828673 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1828673 ']' 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1828673 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1828673 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1828673' 00:22:50.917 killing process with pid 1828673 00:22:50.917 05:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1828673 00:22:50.917 05:44:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1828673 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:51.176 rmmod nvme_tcp 00:22:51.176 rmmod nvme_fabrics 00:22:51.176 rmmod nvme_keyring 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1828556 ']' 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1828556 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1828556 ']' 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1828556 00:22:51.176 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.435 05:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1828556 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1828556' 00:22:51.435 killing process with pid 1828556 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1828556 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1828556 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.435 05:44:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.435 05:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.973 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:53.973 00:22:53.973 real 0m12.424s 00:22:53.973 user 0m9.657s 00:22:53.973 sys 0m5.519s 00:22:53.973 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.973 05:44:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:53.973 ************************************ 00:22:53.973 END TEST nvmf_nsid 00:22:53.973 ************************************ 00:22:53.973 05:44:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:53.973 00:22:53.973 real 11m59.218s 00:22:53.973 user 25m52.002s 00:22:53.973 sys 3m39.029s 00:22:53.973 05:44:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.973 05:44:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:53.973 ************************************ 00:22:53.973 END TEST nvmf_target_extra 00:22:53.973 ************************************ 00:22:53.973 05:44:41 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:53.973 05:44:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:53.973 05:44:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.973 05:44:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:53.973 ************************************ 00:22:53.973 START TEST nvmf_host 00:22:53.973 ************************************ 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:53.973 * Looking for test storage... 
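The `iptr` teardown step above (`iptables-save` piped through `grep -v SPDK_NVMF` into `iptables-restore`) drops every firewall rule the test tagged with the SPDK_NVMF comment by round-tripping the saved ruleset through a filter. Replaying that verbatim needs root and a live iptables, so here is the filtering half alone on a canned ruleset (the rule text is illustrative, not taken from this run):

```shell
# Remove SPDK-tagged rules from a saved ruleset; in the real helper the
# filtered text is piped straight back into iptables-restore.
printf '%s\n' \
    '-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT' \
    '-A INPUT -p tcp --dport 22 -j ACCEPT' \
    | grep -v SPDK_NVMF
# -> -A INPUT -p tcp --dport 22 -j ACCEPT
```

Only the untagged rule survives, which is exactly what makes the save/filter/restore round trip a safe way to undo the test's own rules without touching anything else.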
00:22:53.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:53.973 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:53.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.973 --rc genhtml_branch_coverage=1 00:22:53.973 --rc genhtml_function_coverage=1 00:22:53.974 --rc genhtml_legend=1 00:22:53.974 --rc geninfo_all_blocks=1 00:22:53.974 --rc geninfo_unexecuted_blocks=1 00:22:53.974 00:22:53.974 ' 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:53.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.974 --rc genhtml_branch_coverage=1 00:22:53.974 --rc genhtml_function_coverage=1 00:22:53.974 --rc genhtml_legend=1 00:22:53.974 --rc 
geninfo_all_blocks=1 00:22:53.974 --rc geninfo_unexecuted_blocks=1 00:22:53.974 00:22:53.974 ' 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:53.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.974 --rc genhtml_branch_coverage=1 00:22:53.974 --rc genhtml_function_coverage=1 00:22:53.974 --rc genhtml_legend=1 00:22:53.974 --rc geninfo_all_blocks=1 00:22:53.974 --rc geninfo_unexecuted_blocks=1 00:22:53.974 00:22:53.974 ' 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:53.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:53.974 --rc genhtml_branch_coverage=1 00:22:53.974 --rc genhtml_function_coverage=1 00:22:53.974 --rc genhtml_legend=1 00:22:53.974 --rc geninfo_all_blocks=1 00:22:53.974 --rc geninfo_unexecuted_blocks=1 00:22:53.974 00:22:53.974 ' 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:53.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.974 ************************************ 00:22:53.974 START TEST nvmf_multicontroller 00:22:53.974 ************************************ 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:53.974 * Looking for test storage... 
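Both sourcings of nvmf/common.sh in this log emit `line 33: [: : integer expression expected`: the xtrace shows `'[' '' -eq 1 ']'`, i.e. `-eq` applied to an empty variable. The message is harmless here (the test is falsy either way), but a defaulted expansion would silence it; a sketch with a hypothetical variable name standing in for whichever env var common.sh@33 tests:

```shell
# flag stands in for the (unset in this run) variable tested at common.sh@33.
flag=''

[ "$flag" -eq 1 ] 2>/dev/null && echo on        # errors: '' is not an integer
[ "${flag:-0}" -eq 1 ] && echo on || echo off   # defaults to 0, no error
# -> off
```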
00:22:53.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:22:53.974 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:53.975 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:54.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.235 --rc genhtml_branch_coverage=1 00:22:54.235 --rc genhtml_function_coverage=1 
00:22:54.235 --rc genhtml_legend=1 00:22:54.235 --rc geninfo_all_blocks=1 00:22:54.235 --rc geninfo_unexecuted_blocks=1 00:22:54.235 00:22:54.235 ' 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:54.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.235 --rc genhtml_branch_coverage=1 00:22:54.235 --rc genhtml_function_coverage=1 00:22:54.235 --rc genhtml_legend=1 00:22:54.235 --rc geninfo_all_blocks=1 00:22:54.235 --rc geninfo_unexecuted_blocks=1 00:22:54.235 00:22:54.235 ' 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:54.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.235 --rc genhtml_branch_coverage=1 00:22:54.235 --rc genhtml_function_coverage=1 00:22:54.235 --rc genhtml_legend=1 00:22:54.235 --rc geninfo_all_blocks=1 00:22:54.235 --rc geninfo_unexecuted_blocks=1 00:22:54.235 00:22:54.235 ' 00:22:54.235 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:54.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.235 --rc genhtml_branch_coverage=1 00:22:54.235 --rc genhtml_function_coverage=1 00:22:54.235 --rc genhtml_legend=1 00:22:54.235 --rc geninfo_all_blocks=1 00:22:54.236 --rc geninfo_unexecuted_blocks=1 00:22:54.236 00:22:54.236 ' 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.236 05:44:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.236 05:44:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:54.236 05:44:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:00.830 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:00.831 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:00.831 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.831 05:44:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:00.831 Found net devices under 0000:86:00.0: cvl_0_0 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:00.831 Found net devices under 0000:86:00.1: cvl_0_1 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:00.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:23:00.831 00:23:00.831 --- 10.0.0.2 ping statistics --- 00:23:00.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.831 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:23:00.831 00:23:00.831 --- 10.0.0.1 ping statistics --- 00:23:00.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.831 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1832790 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1832790 00:23:00.831 05:44:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1832790 ']' 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.831 05:44:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.831 [2024-11-27 05:44:48.000753] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:23:00.832 [2024-11-27 05:44:48.000805] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.832 [2024-11-27 05:44:48.081959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:00.832 [2024-11-27 05:44:48.124059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.832 [2024-11-27 05:44:48.124096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:00.832 [2024-11-27 05:44:48.124103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.832 [2024-11-27 05:44:48.124109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.832 [2024-11-27 05:44:48.124114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.832 [2024-11-27 05:44:48.125550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.832 [2024-11-27 05:44:48.125658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.832 [2024-11-27 05:44:48.125658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 [2024-11-27 05:44:48.262102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 Malloc0 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 [2024-11-27 
05:44:48.317159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 [2024-11-27 05:44:48.325076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 Malloc1 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1833006 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1833006 /var/tmp/bdevperf.sock 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1833006 ']' 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 NVMe0n1 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.832 1 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:00.832 05:44:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.832 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.832 request: 00:23:00.832 { 00:23:00.832 "name": "NVMe0", 00:23:00.832 "trtype": "tcp", 00:23:00.832 "traddr": "10.0.0.2", 00:23:00.832 "adrfam": "ipv4", 00:23:00.832 "trsvcid": "4420", 00:23:00.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.833 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:00.833 "hostaddr": "10.0.0.1", 00:23:00.833 "prchk_reftag": false, 00:23:00.833 "prchk_guard": false, 00:23:00.833 "hdgst": false, 00:23:00.833 "ddgst": false, 00:23:00.833 "allow_unrecognized_csi": false, 00:23:00.833 "method": "bdev_nvme_attach_controller", 00:23:00.833 "req_id": 1 00:23:00.833 } 00:23:00.833 Got JSON-RPC error response 00:23:00.833 response: 00:23:00.833 { 00:23:00.833 "code": -114, 00:23:00.833 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.833 } 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.833 05:44:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.833 request: 00:23:00.833 { 00:23:00.833 "name": "NVMe0", 00:23:00.833 "trtype": "tcp", 00:23:00.833 "traddr": "10.0.0.2", 00:23:00.833 "adrfam": "ipv4", 00:23:00.833 "trsvcid": "4420", 00:23:00.833 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.833 "hostaddr": "10.0.0.1", 00:23:00.833 "prchk_reftag": false, 00:23:00.833 "prchk_guard": false, 00:23:00.833 "hdgst": false, 00:23:00.833 "ddgst": false, 00:23:00.833 "allow_unrecognized_csi": false, 00:23:00.833 "method": "bdev_nvme_attach_controller", 00:23:00.833 "req_id": 1 00:23:00.833 } 00:23:00.833 Got JSON-RPC error response 00:23:00.833 response: 00:23:00.833 { 00:23:00.833 "code": -114, 00:23:00.833 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.833 } 00:23:00.833 05:44:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.833 request: 00:23:00.833 { 00:23:00.833 "name": "NVMe0", 00:23:00.833 "trtype": "tcp", 00:23:00.833 "traddr": "10.0.0.2", 00:23:00.833 "adrfam": "ipv4", 00:23:00.833 "trsvcid": "4420", 00:23:00.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.833 "hostaddr": "10.0.0.1", 00:23:00.833 "prchk_reftag": false, 00:23:00.833 "prchk_guard": false, 00:23:00.833 "hdgst": false, 00:23:00.833 "ddgst": false, 00:23:00.833 "multipath": "disable", 00:23:00.833 "allow_unrecognized_csi": false, 00:23:00.833 "method": "bdev_nvme_attach_controller", 00:23:00.833 "req_id": 1 00:23:00.833 } 00:23:00.833 Got JSON-RPC error response 00:23:00.833 response: 00:23:00.833 { 00:23:00.833 "code": -114, 00:23:00.833 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:00.833 } 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:00.833 request: 00:23:00.833 { 00:23:00.833 "name": "NVMe0", 00:23:00.833 "trtype": "tcp", 00:23:00.833 "traddr": "10.0.0.2", 00:23:00.833 "adrfam": "ipv4", 00:23:00.833 "trsvcid": "4420", 00:23:00.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.833 "hostaddr": "10.0.0.1", 00:23:00.833 "prchk_reftag": false, 00:23:00.833 "prchk_guard": false, 00:23:00.833 "hdgst": false, 00:23:00.833 "ddgst": false, 00:23:00.833 "multipath": "failover", 00:23:00.833 "allow_unrecognized_csi": false, 00:23:00.833 "method": "bdev_nvme_attach_controller", 00:23:00.833 "req_id": 1 00:23:00.833 } 00:23:00.833 Got JSON-RPC error response 00:23:00.833 response: 00:23:00.833 { 00:23:00.833 "code": -114, 00:23:00.833 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:00.833 } 00:23:00.833 05:44:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.833 05:44:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.103 NVMe0n1 00:23:01.103 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.103 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.103 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.103 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.103 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.103 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:01.103 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.103 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.431 00:23:01.431 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.431 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:01.431 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:01.431 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.431 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.431 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.431 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:01.431 05:44:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:02.474 { 00:23:02.474 "results": [ 00:23:02.474 { 00:23:02.474 "job": "NVMe0n1", 00:23:02.474 "core_mask": "0x1", 00:23:02.474 "workload": "write", 00:23:02.474 "status": "finished", 00:23:02.474 "queue_depth": 128, 00:23:02.474 "io_size": 4096, 00:23:02.474 "runtime": 1.00248, 00:23:02.474 "iops": 24608.96975500758, 00:23:02.474 "mibps": 96.12878810549836, 00:23:02.474 "io_failed": 0, 00:23:02.474 "io_timeout": 0, 00:23:02.474 "avg_latency_us": 5194.87124627946, 00:23:02.474 "min_latency_us": 3042.7428571428572, 00:23:02.474 "max_latency_us": 15166.902857142857 00:23:02.474 } 00:23:02.474 ], 00:23:02.474 "core_count": 1 00:23:02.474 } 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1833006 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1833006 ']' 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1833006 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1833006 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1833006' 00:23:02.474 killing process with pid 1833006 00:23:02.474 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1833006 00:23:02.475 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1833006 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:23:02.748 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:02.748 [2024-11-27 05:44:48.428157] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:23:02.748 [2024-11-27 05:44:48.428206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833006 ] 00:23:02.748 [2024-11-27 05:44:48.500801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.748 [2024-11-27 05:44:48.543020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.748 [2024-11-27 05:44:49.233114] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name b546c375-c1f5-4cd6-91fb-e6101010b1b7 already exists 00:23:02.748 [2024-11-27 05:44:49.233143] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:b546c375-c1f5-4cd6-91fb-e6101010b1b7 alias for bdev NVMe1n1 00:23:02.748 [2024-11-27 05:44:49.233151] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:02.748 Running I/O for 1 seconds... 00:23:02.748 24542.00 IOPS, 95.87 MiB/s 00:23:02.748 Latency(us) 00:23:02.748 [2024-11-27T04:44:50.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.748 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:02.748 NVMe0n1 : 1.00 24608.97 96.13 0.00 0.00 5194.87 3042.74 15166.90 00:23:02.748 [2024-11-27T04:44:50.752Z] =================================================================================================================== 00:23:02.748 [2024-11-27T04:44:50.752Z] Total : 24608.97 96.13 0.00 0.00 5194.87 3042.74 15166.90 00:23:02.748 Received shutdown signal, test time was about 1.000000 seconds 00:23:02.748 00:23:02.748 Latency(us) 00:23:02.748 [2024-11-27T04:44:50.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.748 [2024-11-27T04:44:50.752Z] =================================================================================================================== 00:23:02.748 [2024-11-27T04:44:50.752Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:23:02.748 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.748 rmmod nvme_tcp 00:23:02.748 rmmod nvme_fabrics 00:23:02.748 rmmod nvme_keyring 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1832790 ']' 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1832790 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1832790 ']' 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1832790 
00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1832790 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:02.748 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.749 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1832790' 00:23:02.749 killing process with pid 1832790 00:23:02.749 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1832790 00:23:02.749 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1832790 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.008 05:44:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.548 00:23:05.548 real 0m11.203s 00:23:05.548 user 0m12.448s 00:23:05.548 sys 0m5.212s 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:05.548 ************************************ 00:23:05.548 END TEST nvmf_multicontroller 00:23:05.548 ************************************ 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.548 ************************************ 00:23:05.548 START TEST nvmf_aer 00:23:05.548 ************************************ 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:05.548 * Looking for test storage... 
00:23:05.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:05.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.548 --rc genhtml_branch_coverage=1 00:23:05.548 --rc genhtml_function_coverage=1 00:23:05.548 --rc genhtml_legend=1 00:23:05.548 --rc geninfo_all_blocks=1 00:23:05.548 --rc geninfo_unexecuted_blocks=1 00:23:05.548 00:23:05.548 ' 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:05.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.548 --rc 
genhtml_branch_coverage=1 00:23:05.548 --rc genhtml_function_coverage=1 00:23:05.548 --rc genhtml_legend=1 00:23:05.548 --rc geninfo_all_blocks=1 00:23:05.548 --rc geninfo_unexecuted_blocks=1 00:23:05.548 00:23:05.548 ' 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:05.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.548 --rc genhtml_branch_coverage=1 00:23:05.548 --rc genhtml_function_coverage=1 00:23:05.548 --rc genhtml_legend=1 00:23:05.548 --rc geninfo_all_blocks=1 00:23:05.548 --rc geninfo_unexecuted_blocks=1 00:23:05.548 00:23:05.548 ' 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:05.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.548 --rc genhtml_branch_coverage=1 00:23:05.548 --rc genhtml_function_coverage=1 00:23:05.548 --rc genhtml_legend=1 00:23:05.548 --rc geninfo_all_blocks=1 00:23:05.548 --rc geninfo_unexecuted_blocks=1 00:23:05.548 00:23:05.548 ' 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.548 05:44:53 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.548 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.549 05:44:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:12.125 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:12.125 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:12.125 05:44:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:12.125 Found net devices under 0000:86:00.0: cvl_0_0 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:12.125 Found net devices under 0000:86:00.1: cvl_0_1 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:12.125 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:12.126 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:12.126 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:12.126 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:12.126 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:12.126 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:12.126 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:12.126 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:12.126 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:12.126 05:44:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:12.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:12.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:23:12.126 00:23:12.126 --- 10.0.0.2 ping statistics --- 00:23:12.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.126 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:12.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:23:12.126 00:23:12.126 --- 10.0.0.1 ping statistics --- 00:23:12.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.126 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1836799 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1836799 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1836799 ']' 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 [2024-11-27 05:44:59.329165] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:23:12.126 [2024-11-27 05:44:59.329217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.126 [2024-11-27 05:44:59.406400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.126 [2024-11-27 05:44:59.450537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:12.126 [2024-11-27 05:44:59.450573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.126 [2024-11-27 05:44:59.450580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.126 [2024-11-27 05:44:59.450586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.126 [2024-11-27 05:44:59.450591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.126 [2024-11-27 05:44:59.452199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.126 [2024-11-27 05:44:59.452305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.126 [2024-11-27 05:44:59.452415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.126 [2024-11-27 05:44:59.452416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 [2024-11-27 05:44:59.590956] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 Malloc0 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 [2024-11-27 05:44:59.651347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.126 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.126 [ 00:23:12.126 { 00:23:12.126 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:12.126 "subtype": "Discovery", 00:23:12.126 "listen_addresses": [], 00:23:12.126 "allow_any_host": true, 00:23:12.126 "hosts": [] 00:23:12.126 }, 00:23:12.126 { 00:23:12.126 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.126 "subtype": "NVMe", 00:23:12.126 "listen_addresses": [ 00:23:12.126 { 00:23:12.126 "trtype": "TCP", 00:23:12.126 "adrfam": "IPv4", 00:23:12.126 "traddr": "10.0.0.2", 00:23:12.126 "trsvcid": "4420" 00:23:12.126 } 00:23:12.126 ], 00:23:12.126 "allow_any_host": true, 00:23:12.126 "hosts": [], 00:23:12.126 "serial_number": "SPDK00000000000001", 00:23:12.127 "model_number": "SPDK bdev Controller", 00:23:12.127 "max_namespaces": 2, 00:23:12.127 "min_cntlid": 1, 00:23:12.127 "max_cntlid": 65519, 00:23:12.127 "namespaces": [ 00:23:12.127 { 00:23:12.127 "nsid": 1, 00:23:12.127 "bdev_name": "Malloc0", 00:23:12.127 "name": "Malloc0", 00:23:12.127 "nguid": "9BF38DD91C774698B8A5F2ADB86E4D8D", 00:23:12.127 "uuid": "9bf38dd9-1c77-4698-b8a5-f2adb86e4d8d" 00:23:12.127 } 00:23:12.127 ] 00:23:12.127 } 00:23:12.127 ] 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1836970 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.127 Malloc1 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.127 Asynchronous Event Request test 00:23:12.127 Attaching to 10.0.0.2 00:23:12.127 Attached to 10.0.0.2 00:23:12.127 Registering asynchronous event callbacks... 00:23:12.127 Starting namespace attribute notice tests for all controllers... 00:23:12.127 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:12.127 aer_cb - Changed Namespace 00:23:12.127 Cleaning up... 
00:23:12.127 [ 00:23:12.127 { 00:23:12.127 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:12.127 "subtype": "Discovery", 00:23:12.127 "listen_addresses": [], 00:23:12.127 "allow_any_host": true, 00:23:12.127 "hosts": [] 00:23:12.127 }, 00:23:12.127 { 00:23:12.127 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.127 "subtype": "NVMe", 00:23:12.127 "listen_addresses": [ 00:23:12.127 { 00:23:12.127 "trtype": "TCP", 00:23:12.127 "adrfam": "IPv4", 00:23:12.127 "traddr": "10.0.0.2", 00:23:12.127 "trsvcid": "4420" 00:23:12.127 } 00:23:12.127 ], 00:23:12.127 "allow_any_host": true, 00:23:12.127 "hosts": [], 00:23:12.127 "serial_number": "SPDK00000000000001", 00:23:12.127 "model_number": "SPDK bdev Controller", 00:23:12.127 "max_namespaces": 2, 00:23:12.127 "min_cntlid": 1, 00:23:12.127 "max_cntlid": 65519, 00:23:12.127 "namespaces": [ 00:23:12.127 { 00:23:12.127 "nsid": 1, 00:23:12.127 "bdev_name": "Malloc0", 00:23:12.127 "name": "Malloc0", 00:23:12.127 "nguid": "9BF38DD91C774698B8A5F2ADB86E4D8D", 00:23:12.127 "uuid": "9bf38dd9-1c77-4698-b8a5-f2adb86e4d8d" 00:23:12.127 }, 00:23:12.127 { 00:23:12.127 "nsid": 2, 00:23:12.127 "bdev_name": "Malloc1", 00:23:12.127 "name": "Malloc1", 00:23:12.127 "nguid": "C4474113D018496D819A3B4AA35CD104", 00:23:12.127 "uuid": "c4474113-d018-496d-819a-3b4aa35cd104" 00:23:12.127 } 00:23:12.127 ] 00:23:12.127 } 00:23:12.127 ] 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1836970 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.127 05:44:59 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.127 05:44:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.127 rmmod nvme_tcp 00:23:12.127 rmmod nvme_fabrics 00:23:12.127 rmmod nvme_keyring 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1836799 ']' 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1836799 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1836799 ']' 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1836799 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.127 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1836799 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1836799' 00:23:12.386 killing process with pid 1836799 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1836799 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1836799 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.386 05:45:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.924 00:23:14.924 real 0m9.264s 00:23:14.924 user 0m5.117s 00:23:14.924 sys 0m4.874s 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:14.924 ************************************ 00:23:14.924 END TEST nvmf_aer 00:23:14.924 ************************************ 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.924 ************************************ 00:23:14.924 START TEST nvmf_async_init 00:23:14.924 ************************************ 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:14.924 * Looking for test storage... 
00:23:14.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.924 05:45:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:14.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.924 --rc genhtml_branch_coverage=1 00:23:14.924 --rc genhtml_function_coverage=1 00:23:14.924 --rc genhtml_legend=1 00:23:14.924 --rc geninfo_all_blocks=1 00:23:14.924 --rc geninfo_unexecuted_blocks=1 00:23:14.924 
00:23:14.924 ' 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:14.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.924 --rc genhtml_branch_coverage=1 00:23:14.924 --rc genhtml_function_coverage=1 00:23:14.924 --rc genhtml_legend=1 00:23:14.924 --rc geninfo_all_blocks=1 00:23:14.924 --rc geninfo_unexecuted_blocks=1 00:23:14.924 00:23:14.924 ' 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:14.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.924 --rc genhtml_branch_coverage=1 00:23:14.924 --rc genhtml_function_coverage=1 00:23:14.924 --rc genhtml_legend=1 00:23:14.924 --rc geninfo_all_blocks=1 00:23:14.924 --rc geninfo_unexecuted_blocks=1 00:23:14.924 00:23:14.924 ' 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:14.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.924 --rc genhtml_branch_coverage=1 00:23:14.924 --rc genhtml_function_coverage=1 00:23:14.924 --rc genhtml_legend=1 00:23:14.924 --rc geninfo_all_blocks=1 00:23:14.924 --rc geninfo_unexecuted_blocks=1 00:23:14.924 00:23:14.924 ' 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.924 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9f8e825b8c5e410f8d013fdd8b99d320 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.925 05:45:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:21.566 05:45:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:21.566 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:21.566 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:21.566 Found net devices under 0000:86:00.0: cvl_0_0 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:21.566 Found net devices under 0000:86:00.1: cvl_0_1 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:21.566 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:21.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:21.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:23:21.567 00:23:21.567 --- 10.0.0.2 ping statistics --- 00:23:21.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.567 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:23:21.567 00:23:21.567 --- 10.0.0.1 ping statistics --- 00:23:21.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.567 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1840849 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1840849 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1840849 ']' 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 [2024-11-27 05:45:08.682972] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:23:21.567 [2024-11-27 05:45:08.683028] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.567 [2024-11-27 05:45:08.762577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.567 [2024-11-27 05:45:08.802652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.567 [2024-11-27 05:45:08.802694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.567 [2024-11-27 05:45:08.802702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.567 [2024-11-27 05:45:08.802708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.567 [2024-11-27 05:45:08.802712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
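The `nvmfappstart`/`waitforlisten` sequence above starts `nvmf_tgt` and then blocks until the app answers on its RPC UNIX socket (`/var/tmp/spdk.sock`). A minimal sketch of that polling pattern, simplified to wait for a path to appear rather than issuing a real RPC (the actual helper also checks that the target PID is still alive between retries):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll until a path (e.g. the SPDK RPC
# UNIX socket) shows up, giving up after max_retries attempts. The real
# helper verifies the app answers RPCs; this only checks path existence.
wait_for_path() {
    local path=$1 max_retries=${2:-50}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}
```

The function names here are hypothetical; in the log the equivalent step is `waitforlisten 1840849` from `common/autotest_common.sh`.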
00:23:21.567 [2024-11-27 05:45:08.803273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 [2024-11-27 05:45:08.948153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 null0 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9f8e825b8c5e410f8d013fdd8b99d320 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 [2024-11-27 05:45:08.988384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.567 05:45:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 nvme0n1 00:23:21.567 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.567 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:21.567 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.567 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.567 [ 00:23:21.567 { 00:23:21.568 "name": "nvme0n1", 00:23:21.568 "aliases": [ 00:23:21.568 "9f8e825b-8c5e-410f-8d01-3fdd8b99d320" 00:23:21.568 ], 00:23:21.568 "product_name": "NVMe disk", 00:23:21.568 "block_size": 512, 00:23:21.568 "num_blocks": 2097152, 00:23:21.568 "uuid": "9f8e825b-8c5e-410f-8d01-3fdd8b99d320", 00:23:21.568 "numa_id": 1, 00:23:21.568 "assigned_rate_limits": { 00:23:21.568 "rw_ios_per_sec": 0, 00:23:21.568 "rw_mbytes_per_sec": 0, 00:23:21.568 "r_mbytes_per_sec": 0, 00:23:21.568 "w_mbytes_per_sec": 0 00:23:21.568 }, 00:23:21.568 "claimed": false, 00:23:21.568 "zoned": false, 00:23:21.568 "supported_io_types": { 00:23:21.568 "read": true, 00:23:21.568 "write": true, 00:23:21.568 "unmap": false, 00:23:21.568 "flush": true, 00:23:21.568 "reset": true, 00:23:21.568 "nvme_admin": true, 00:23:21.568 "nvme_io": true, 00:23:21.568 "nvme_io_md": false, 00:23:21.568 "write_zeroes": true, 00:23:21.568 "zcopy": false, 00:23:21.568 "get_zone_info": false, 00:23:21.568 "zone_management": false, 00:23:21.568 "zone_append": false, 00:23:21.568 "compare": true, 00:23:21.568 "compare_and_write": true, 00:23:21.568 "abort": true, 00:23:21.568 "seek_hole": false, 00:23:21.568 "seek_data": false, 00:23:21.568 "copy": true, 00:23:21.568 
"nvme_iov_md": false 00:23:21.568 }, 00:23:21.568 "memory_domains": [ 00:23:21.568 { 00:23:21.568 "dma_device_id": "system", 00:23:21.568 "dma_device_type": 1 00:23:21.568 } 00:23:21.568 ], 00:23:21.568 "driver_specific": { 00:23:21.568 "nvme": [ 00:23:21.568 { 00:23:21.568 "trid": { 00:23:21.568 "trtype": "TCP", 00:23:21.568 "adrfam": "IPv4", 00:23:21.568 "traddr": "10.0.0.2", 00:23:21.568 "trsvcid": "4420", 00:23:21.568 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:21.568 }, 00:23:21.568 "ctrlr_data": { 00:23:21.568 "cntlid": 1, 00:23:21.568 "vendor_id": "0x8086", 00:23:21.568 "model_number": "SPDK bdev Controller", 00:23:21.568 "serial_number": "00000000000000000000", 00:23:21.568 "firmware_revision": "25.01", 00:23:21.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.568 "oacs": { 00:23:21.568 "security": 0, 00:23:21.568 "format": 0, 00:23:21.568 "firmware": 0, 00:23:21.568 "ns_manage": 0 00:23:21.568 }, 00:23:21.568 "multi_ctrlr": true, 00:23:21.568 "ana_reporting": false 00:23:21.568 }, 00:23:21.568 "vs": { 00:23:21.568 "nvme_version": "1.3" 00:23:21.568 }, 00:23:21.568 "ns_data": { 00:23:21.568 "id": 1, 00:23:21.568 "can_share": true 00:23:21.568 } 00:23:21.568 } 00:23:21.568 ], 00:23:21.568 "mp_policy": "active_passive" 00:23:21.568 } 00:23:21.568 } 00:23:21.568 ] 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.568 [2024-11-27 05:45:09.236967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:21.568 [2024-11-27 05:45:09.237022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1209e20 (9): Bad file descriptor 00:23:21.568 [2024-11-27 05:45:09.368752] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.568 [ 00:23:21.568 { 00:23:21.568 "name": "nvme0n1", 00:23:21.568 "aliases": [ 00:23:21.568 "9f8e825b-8c5e-410f-8d01-3fdd8b99d320" 00:23:21.568 ], 00:23:21.568 "product_name": "NVMe disk", 00:23:21.568 "block_size": 512, 00:23:21.568 "num_blocks": 2097152, 00:23:21.568 "uuid": "9f8e825b-8c5e-410f-8d01-3fdd8b99d320", 00:23:21.568 "numa_id": 1, 00:23:21.568 "assigned_rate_limits": { 00:23:21.568 "rw_ios_per_sec": 0, 00:23:21.568 "rw_mbytes_per_sec": 0, 00:23:21.568 "r_mbytes_per_sec": 0, 00:23:21.568 "w_mbytes_per_sec": 0 00:23:21.568 }, 00:23:21.568 "claimed": false, 00:23:21.568 "zoned": false, 00:23:21.568 "supported_io_types": { 00:23:21.568 "read": true, 00:23:21.568 "write": true, 00:23:21.568 "unmap": false, 00:23:21.568 "flush": true, 00:23:21.568 "reset": true, 00:23:21.568 "nvme_admin": true, 00:23:21.568 "nvme_io": true, 00:23:21.568 "nvme_io_md": false, 00:23:21.568 "write_zeroes": true, 00:23:21.568 "zcopy": false, 00:23:21.568 "get_zone_info": false, 00:23:21.568 "zone_management": false, 00:23:21.568 "zone_append": false, 00:23:21.568 "compare": true, 00:23:21.568 "compare_and_write": true, 00:23:21.568 "abort": true, 00:23:21.568 "seek_hole": false, 00:23:21.568 "seek_data": false, 00:23:21.568 "copy": true, 00:23:21.568 "nvme_iov_md": false 00:23:21.568 }, 00:23:21.568 "memory_domains": [ 
00:23:21.568 { 00:23:21.568 "dma_device_id": "system", 00:23:21.568 "dma_device_type": 1 00:23:21.568 } 00:23:21.568 ], 00:23:21.568 "driver_specific": { 00:23:21.568 "nvme": [ 00:23:21.568 { 00:23:21.568 "trid": { 00:23:21.568 "trtype": "TCP", 00:23:21.568 "adrfam": "IPv4", 00:23:21.568 "traddr": "10.0.0.2", 00:23:21.568 "trsvcid": "4420", 00:23:21.568 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:21.568 }, 00:23:21.568 "ctrlr_data": { 00:23:21.568 "cntlid": 2, 00:23:21.568 "vendor_id": "0x8086", 00:23:21.568 "model_number": "SPDK bdev Controller", 00:23:21.568 "serial_number": "00000000000000000000", 00:23:21.568 "firmware_revision": "25.01", 00:23:21.568 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.568 "oacs": { 00:23:21.568 "security": 0, 00:23:21.568 "format": 0, 00:23:21.568 "firmware": 0, 00:23:21.568 "ns_manage": 0 00:23:21.568 }, 00:23:21.568 "multi_ctrlr": true, 00:23:21.568 "ana_reporting": false 00:23:21.568 }, 00:23:21.568 "vs": { 00:23:21.568 "nvme_version": "1.3" 00:23:21.568 }, 00:23:21.568 "ns_data": { 00:23:21.568 "id": 1, 00:23:21.568 "can_share": true 00:23:21.568 } 00:23:21.568 } 00:23:21.568 ], 00:23:21.568 "mp_policy": "active_passive" 00:23:21.568 } 00:23:21.568 } 00:23:21.568 ] 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jWNdVggZVN 
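Comparing the two `bdev_get_bdevs -b nvme0n1` dumps above, the only material change after `bdev_nvme_reset_controller` is `cntlid` going from 1 to 2 (a fresh controller connection). When `jq` is not on the box, a line-oriented grep/awk one-liner is enough for that kind of spot check (fragile by design, not a JSON parser):

```shell
#!/usr/bin/env bash
# Pull the first "cntlid" value out of bdev_get_bdevs JSON on stdin.
# Line-oriented approximation; good enough for eyeballing a reset.
get_cntlid() {
    grep -o '"cntlid": *[0-9]*' | head -n1 | awk -F': *' '{print $2}'
}
```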
00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jWNdVggZVN 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.jWNdVggZVN 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.568 [2024-11-27 05:45:09.429546] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.568 [2024-11-27 05:45:09.429644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
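The key written to `/tmp/tmp.jWNdVggZVN` follows the NVMe-oF TLS PSK interchange format: a `NVMeTLSkey-1:` prefix, a two-digit hash identifier, a base64 payload (the configured PSK followed by a CRC32 trailer), and a closing colon. A shape-only sanity check, which validates the layout but deliberately not the CRC, could look like:

```shell
#!/usr/bin/env bash
# Shape check for an NVMe TLS PSK interchange string such as
#   NVMeTLSkey-1:01:MDAx...JEiQ:
# Accepts hash ids 00/01/02; does not verify the embedded CRC32.
is_tls_psk() {
    [[ $1 =~ ^NVMeTLSkey-1:0[0-2]:[A-Za-z0-9+/]+={0,2}:$ ]]
}
```

The `is_tls_psk` name is hypothetical; SPDK itself validates the key when `keyring_file_add_key`/`--psk key0` is used, as the log shows.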
00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.568 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.569 [2024-11-27 05:45:09.445602] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.569 nvme0n1 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.569 [ 00:23:21.569 { 00:23:21.569 "name": "nvme0n1", 00:23:21.569 "aliases": [ 00:23:21.569 "9f8e825b-8c5e-410f-8d01-3fdd8b99d320" 00:23:21.569 ], 00:23:21.569 "product_name": "NVMe disk", 00:23:21.569 "block_size": 512, 00:23:21.569 "num_blocks": 2097152, 00:23:21.569 "uuid": "9f8e825b-8c5e-410f-8d01-3fdd8b99d320", 00:23:21.569 "numa_id": 1, 00:23:21.569 "assigned_rate_limits": { 00:23:21.569 "rw_ios_per_sec": 0, 00:23:21.569 
"rw_mbytes_per_sec": 0, 00:23:21.569 "r_mbytes_per_sec": 0, 00:23:21.569 "w_mbytes_per_sec": 0 00:23:21.569 }, 00:23:21.569 "claimed": false, 00:23:21.569 "zoned": false, 00:23:21.569 "supported_io_types": { 00:23:21.569 "read": true, 00:23:21.569 "write": true, 00:23:21.569 "unmap": false, 00:23:21.569 "flush": true, 00:23:21.569 "reset": true, 00:23:21.569 "nvme_admin": true, 00:23:21.569 "nvme_io": true, 00:23:21.569 "nvme_io_md": false, 00:23:21.569 "write_zeroes": true, 00:23:21.569 "zcopy": false, 00:23:21.569 "get_zone_info": false, 00:23:21.569 "zone_management": false, 00:23:21.569 "zone_append": false, 00:23:21.569 "compare": true, 00:23:21.569 "compare_and_write": true, 00:23:21.569 "abort": true, 00:23:21.569 "seek_hole": false, 00:23:21.569 "seek_data": false, 00:23:21.569 "copy": true, 00:23:21.569 "nvme_iov_md": false 00:23:21.569 }, 00:23:21.569 "memory_domains": [ 00:23:21.569 { 00:23:21.569 "dma_device_id": "system", 00:23:21.569 "dma_device_type": 1 00:23:21.569 } 00:23:21.569 ], 00:23:21.569 "driver_specific": { 00:23:21.569 "nvme": [ 00:23:21.569 { 00:23:21.569 "trid": { 00:23:21.569 "trtype": "TCP", 00:23:21.569 "adrfam": "IPv4", 00:23:21.569 "traddr": "10.0.0.2", 00:23:21.569 "trsvcid": "4421", 00:23:21.569 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:21.569 }, 00:23:21.569 "ctrlr_data": { 00:23:21.569 "cntlid": 3, 00:23:21.569 "vendor_id": "0x8086", 00:23:21.569 "model_number": "SPDK bdev Controller", 00:23:21.569 "serial_number": "00000000000000000000", 00:23:21.569 "firmware_revision": "25.01", 00:23:21.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.569 "oacs": { 00:23:21.569 "security": 0, 00:23:21.569 "format": 0, 00:23:21.569 "firmware": 0, 00:23:21.569 "ns_manage": 0 00:23:21.569 }, 00:23:21.569 "multi_ctrlr": true, 00:23:21.569 "ana_reporting": false 00:23:21.569 }, 00:23:21.569 "vs": { 00:23:21.569 "nvme_version": "1.3" 00:23:21.569 }, 00:23:21.569 "ns_data": { 00:23:21.569 "id": 1, 00:23:21.569 "can_share": true 00:23:21.569 } 
00:23:21.569 } 00:23:21.569 ], 00:23:21.569 "mp_policy": "active_passive" 00:23:21.569 } 00:23:21.569 } 00:23:21.569 ] 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.jWNdVggZVN 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.569 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.569 rmmod nvme_tcp 00:23:21.829 rmmod nvme_fabrics 00:23:21.829 rmmod nvme_keyring 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:21.829 05:45:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1840849 ']' 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1840849 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1840849 ']' 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1840849 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1840849 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1840849' 00:23:21.829 killing process with pid 1840849 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1840849 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1840849 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:21.829 
05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.829 05:45:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.366 05:45:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.366 00:23:24.366 real 0m9.440s 00:23:24.366 user 0m2.969s 00:23:24.366 sys 0m4.861s 00:23:24.366 05:45:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.366 05:45:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.366 ************************************ 00:23:24.366 END TEST nvmf_async_init 00:23:24.366 ************************************ 00:23:24.366 05:45:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:24.366 05:45:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:24.366 05:45:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.366 05:45:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.366 ************************************ 00:23:24.366 START TEST dma 00:23:24.366 ************************************ 00:23:24.366 05:45:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:24.366 * Looking for test storage... 00:23:24.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.366 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:24.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.367 --rc genhtml_branch_coverage=1 00:23:24.367 --rc genhtml_function_coverage=1 00:23:24.367 --rc genhtml_legend=1 00:23:24.367 --rc geninfo_all_blocks=1 00:23:24.367 --rc geninfo_unexecuted_blocks=1 00:23:24.367 00:23:24.367 ' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:24.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.367 --rc genhtml_branch_coverage=1 00:23:24.367 --rc genhtml_function_coverage=1 
00:23:24.367 --rc genhtml_legend=1 00:23:24.367 --rc geninfo_all_blocks=1 00:23:24.367 --rc geninfo_unexecuted_blocks=1 00:23:24.367 00:23:24.367 ' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:24.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.367 --rc genhtml_branch_coverage=1 00:23:24.367 --rc genhtml_function_coverage=1 00:23:24.367 --rc genhtml_legend=1 00:23:24.367 --rc geninfo_all_blocks=1 00:23:24.367 --rc geninfo_unexecuted_blocks=1 00:23:24.367 00:23:24.367 ' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:24.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.367 --rc genhtml_branch_coverage=1 00:23:24.367 --rc genhtml_function_coverage=1 00:23:24.367 --rc genhtml_legend=1 00:23:24.367 --rc geninfo_all_blocks=1 00:23:24.367 --rc geninfo_unexecuted_blocks=1 00:23:24.367 00:23:24.367 ' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:24.367 
05:45:12 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:24.367 00:23:24.367 real 0m0.210s 00:23:24.367 user 0m0.140s 00:23:24.367 sys 0m0.085s 00:23:24.367 05:45:12 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:24.367 ************************************ 00:23:24.367 END TEST dma 00:23:24.367 ************************************ 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.367 ************************************ 00:23:24.367 START TEST nvmf_identify 00:23:24.367 ************************************ 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:24.367 * Looking for test storage... 
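The `[: : integer expression expected` message in the trace above is bash complaining about `'[' '' -eq 1 ']'`: common.sh applies a numeric operator to a variable that is empty. A defensive pattern that keeps the test well-formed (the variable name `flag` here is a stand-in, not the script's own):

```shell
#!/usr/bin/env bash
# `[ "$flag" -eq 1 ]` prints "integer expression expected" when $flag is
# empty. Defaulting the expansion to 0 makes the comparison valid either way.
flag=""                                   # simulates the unset variable

if [ "${flag:-0}" -eq 1 ]; then           # empty -> compared as 0
    echo "feature enabled"
else
    echo "feature disabled"
fi
```

With `flag=""` this prints "feature disabled" instead of erroring; the log's error is harmless here only because the branch was meant to be skipped anyway.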
00:23:24.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:23:24.367 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.628 --rc genhtml_branch_coverage=1 00:23:24.628 --rc genhtml_function_coverage=1 00:23:24.628 --rc genhtml_legend=1 00:23:24.628 --rc geninfo_all_blocks=1 00:23:24.628 --rc geninfo_unexecuted_blocks=1 00:23:24.628 00:23:24.628 ' 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
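The `lt 1.15 2` call traced above splits each version string on `.`/`-` with `IFS=.-`, then walks the fields comparing them numerically, padding the shorter version with zeros. A condensed standalone sketch of that technique (a simplified reimplementation, not SPDK's actual `cmp_versions`):

```shell
#!/usr/bin/env bash
# Compare dotted versions field by field: split on '.' and '-', compare
# numerically, and treat missing trailing fields as 0. Returns 0 (true)
# when $1 is strictly less than $2.
version_lt() {
    local -a v1 v2
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal -> not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

The numeric comparison is what makes `1.2.3 < 1.10` come out true, where a plain string comparison would get it wrong.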
# LCOV_OPTS=' 00:23:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.628 --rc genhtml_branch_coverage=1 00:23:24.628 --rc genhtml_function_coverage=1 00:23:24.628 --rc genhtml_legend=1 00:23:24.628 --rc geninfo_all_blocks=1 00:23:24.628 --rc geninfo_unexecuted_blocks=1 00:23:24.628 00:23:24.628 ' 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.628 --rc genhtml_branch_coverage=1 00:23:24.628 --rc genhtml_function_coverage=1 00:23:24.628 --rc genhtml_legend=1 00:23:24.628 --rc geninfo_all_blocks=1 00:23:24.628 --rc geninfo_unexecuted_blocks=1 00:23:24.628 00:23:24.628 ' 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:24.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.628 --rc genhtml_branch_coverage=1 00:23:24.628 --rc genhtml_function_coverage=1 00:23:24.628 --rc genhtml_legend=1 00:23:24.628 --rc geninfo_all_blocks=1 00:23:24.628 --rc geninfo_unexecuted_blocks=1 00:23:24.628 00:23:24.628 ' 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.628 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
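The PATH strings echoed above gain another `golangci`/`protoc`/`go` triple each time paths/export.sh is re-sourced, because it prepends unconditionally. A common idempotent-prepend guard (the helper name `path_prepend` is hypothetical, not from the SPDK scripts):

```shell
#!/usr/bin/env bash
# Prepend a directory to PATH only if it is not already present, so that
# repeated sourcing cannot balloon the variable the way the log shows.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already on PATH: do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

PATH=/usr/bin:/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin      # second call is a no-op
echo "$PATH"                         # /opt/go/1.21.1/bin:/usr/bin:/bin
```

Wrapping the colons around both sides of the match avoids false hits on substrings of longer directory names.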
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.629 05:45:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:31.201 05:45:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:31.201 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:31.201 
05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:31.201 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:31.201 Found net devices under 0000:86:00.0: cvl_0_0 00:23:31.201 05:45:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:31.201 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:31.202 Found net devices under 0000:86:00.1: cvl_0_1 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
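The device-discovery loop traced above resolves each PCI address in `pci_devs` to its kernel interface name by globbing `/sys/bus/pci/devices/$pci/net/*`, which is how `0000:86:00.0` maps to `cvl_0_0`. A minimal standalone version of that lookup (the PCI address below is just the example from the log):

```shell
#!/usr/bin/env bash
# Map a PCI function to its bound network interface name(s) via sysfs,
# in the spirit of the pci_net_devs glob in nvmf/common.sh. Prints
# nothing when the device is absent or has no netdev attached.
pci_to_netdevs() {
    local pci=$1 dev
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] && basename "$dev"
    done
}

pci_to_netdevs 0000:86:00.0   # example address from the log above
```

The `[ -e "$dev" ]` check handles the case where the glob matched nothing and bash passed the pattern through literally.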
00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:31.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:23:31.202 00:23:31.202 --- 10.0.0.2 ping statistics --- 00:23:31.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.202 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:23:31.202 00:23:31.202 --- 10.0.0.1 ping statistics --- 00:23:31.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.202 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
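The nvmf_tcp_init sequence above isolates the target port in a network namespace: create the ns, move `cvl_0_0` into it, address both ends on 10.0.0.0/24, bring the links up, then ping across. A dry-run sketch of that sequence (interface names copied from the log; the `run` wrapper is hypothetical and only echoes the commands unless DRY_RUN=0, since executing them needs root and the real NICs):

```shell
#!/usr/bin/env bash
# Reconstruct the netns setup from the trace. With DRY_RUN=1 (default)
# each command is printed rather than executed.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"               # target port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side (host)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                            # initiator -> target
```

Keeping the target interface in its own namespace is what lets a single machine act as both NVMe-oF initiator and target over a real physical link.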
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1844810 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1844810 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1844810 ']' 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.202 05:45:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.202 [2024-11-27 05:45:18.464217] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:23:31.202 [2024-11-27 05:45:18.464269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.202 [2024-11-27 05:45:18.544091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.202 [2024-11-27 05:45:18.587461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.202 [2024-11-27 05:45:18.587499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.202 [2024-11-27 05:45:18.587506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.202 [2024-11-27 05:45:18.587512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.202 [2024-11-27 05:45:18.587519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:31.202 [2024-11-27 05:45:18.589057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.202 [2024-11-27 05:45:18.589166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.202 [2024-11-27 05:45:18.589277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.202 [2024-11-27 05:45:18.589278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 [2024-11-27 05:45:19.305753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 Malloc0 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.462 05:45:19 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 [2024-11-27 05:45:19.404439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 05:45:19 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 [ 00:23:31.462 { 00:23:31.462 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:31.462 "subtype": "Discovery", 00:23:31.462 "listen_addresses": [ 00:23:31.462 { 00:23:31.462 "trtype": "TCP", 00:23:31.462 "adrfam": "IPv4", 00:23:31.462 "traddr": "10.0.0.2", 00:23:31.462 "trsvcid": "4420" 00:23:31.462 } 00:23:31.462 ], 00:23:31.462 "allow_any_host": true, 00:23:31.462 "hosts": [] 00:23:31.462 }, 00:23:31.462 { 00:23:31.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.462 "subtype": "NVMe", 00:23:31.462 "listen_addresses": [ 00:23:31.462 { 00:23:31.462 "trtype": "TCP", 00:23:31.462 "adrfam": "IPv4", 00:23:31.462 "traddr": "10.0.0.2", 00:23:31.462 "trsvcid": "4420" 00:23:31.462 } 00:23:31.462 ], 00:23:31.462 "allow_any_host": true, 00:23:31.462 "hosts": [], 00:23:31.462 "serial_number": "SPDK00000000000001", 00:23:31.462 "model_number": "SPDK bdev Controller", 00:23:31.462 "max_namespaces": 32, 00:23:31.462 "min_cntlid": 1, 00:23:31.462 "max_cntlid": 65519, 00:23:31.462 "namespaces": [ 00:23:31.462 { 00:23:31.462 "nsid": 1, 00:23:31.462 "bdev_name": "Malloc0", 00:23:31.462 "name": "Malloc0", 00:23:31.462 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:31.462 "eui64": "ABCDEF0123456789", 00:23:31.462 "uuid": "7c7df39a-de95-4cfd-a29e-e4a9a8b083cf" 00:23:31.462 } 00:23:31.462 ] 00:23:31.462 } 00:23:31.462 ] 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.462 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:31.462 [2024-11-27 05:45:19.456030] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:23:31.462 [2024-11-27 05:45:19.456070] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844935 ] 00:23:31.727 [2024-11-27 05:45:19.497177] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:31.727 [2024-11-27 05:45:19.497226] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:31.727 [2024-11-27 05:45:19.497231] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:31.727 [2024-11-27 05:45:19.497246] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:31.727 [2024-11-27 05:45:19.497254] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:31.727 [2024-11-27 05:45:19.500974] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:31.727 [2024-11-27 05:45:19.501007] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb0d690 0 00:23:31.727 [2024-11-27 05:45:19.508681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:31.727 [2024-11-27 05:45:19.508695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:31.727 [2024-11-27 05:45:19.508700] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:31.727 [2024-11-27 05:45:19.508703] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:31.727 [2024-11-27 05:45:19.508734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.727 [2024-11-27 05:45:19.508740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.727 [2024-11-27 05:45:19.508744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb0d690) 00:23:31.727 [2024-11-27 05:45:19.508756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:31.727 [2024-11-27 05:45:19.508774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f100, cid 0, qid 0 00:23:31.727 [2024-11-27 05:45:19.516677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.727 [2024-11-27 05:45:19.516685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.727 [2024-11-27 05:45:19.516689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.516693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f100) on tqpair=0xb0d690 00:23:31.728 [2024-11-27 05:45:19.516703] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:31.728 [2024-11-27 05:45:19.516710] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:31.728 [2024-11-27 05:45:19.516718] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:31.728 [2024-11-27 05:45:19.516731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.516735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.516738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb0d690) 
00:23:31.728 [2024-11-27 05:45:19.516745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.728 [2024-11-27 05:45:19.516757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f100, cid 0, qid 0 00:23:31.728 [2024-11-27 05:45:19.516832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.728 [2024-11-27 05:45:19.516838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.728 [2024-11-27 05:45:19.516841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.516844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f100) on tqpair=0xb0d690 00:23:31.728 [2024-11-27 05:45:19.516852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:31.728 [2024-11-27 05:45:19.516858] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:31.728 [2024-11-27 05:45:19.516865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.516868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.516871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb0d690) 00:23:31.728 [2024-11-27 05:45:19.516877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.728 [2024-11-27 05:45:19.516887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f100, cid 0, qid 0 00:23:31.728 [2024-11-27 05:45:19.516950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.728 [2024-11-27 05:45:19.516956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:31.728 [2024-11-27 05:45:19.516959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.516963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f100) on tqpair=0xb0d690 00:23:31.728 [2024-11-27 05:45:19.516967] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:31.728 [2024-11-27 05:45:19.516974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:31.728 [2024-11-27 05:45:19.516980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.516983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.516986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb0d690) 00:23:31.728 [2024-11-27 05:45:19.516991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.728 [2024-11-27 05:45:19.517001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f100, cid 0, qid 0 00:23:31.728 [2024-11-27 05:45:19.517067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.728 [2024-11-27 05:45:19.517072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.728 [2024-11-27 05:45:19.517076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f100) on tqpair=0xb0d690 00:23:31.728 [2024-11-27 05:45:19.517084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:31.728 [2024-11-27 05:45:19.517092] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb0d690) 00:23:31.728 [2024-11-27 05:45:19.517107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.728 [2024-11-27 05:45:19.517116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f100, cid 0, qid 0 00:23:31.728 [2024-11-27 05:45:19.517175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.728 [2024-11-27 05:45:19.517181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.728 [2024-11-27 05:45:19.517184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f100) on tqpair=0xb0d690 00:23:31.728 [2024-11-27 05:45:19.517191] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:31.728 [2024-11-27 05:45:19.517196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:31.728 [2024-11-27 05:45:19.517202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:31.728 [2024-11-27 05:45:19.517310] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:31.728 [2024-11-27 05:45:19.517314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:31.728 [2024-11-27 05:45:19.517322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb0d690) 00:23:31.728 [2024-11-27 05:45:19.517333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.728 [2024-11-27 05:45:19.517343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f100, cid 0, qid 0 00:23:31.728 [2024-11-27 05:45:19.517406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.728 [2024-11-27 05:45:19.517411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.728 [2024-11-27 05:45:19.517414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f100) on tqpair=0xb0d690 00:23:31.728 [2024-11-27 05:45:19.517422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:31.728 [2024-11-27 05:45:19.517430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb0d690) 00:23:31.728 [2024-11-27 05:45:19.517442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.728 [2024-11-27 05:45:19.517451] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f100, cid 0, qid 0 00:23:31.728 [2024-11-27 
05:45:19.517508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.728 [2024-11-27 05:45:19.517514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.728 [2024-11-27 05:45:19.517517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f100) on tqpair=0xb0d690 00:23:31.728 [2024-11-27 05:45:19.517524] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:31.728 [2024-11-27 05:45:19.517532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:31.728 [2024-11-27 05:45:19.517538] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:31.728 [2024-11-27 05:45:19.517546] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:31.728 [2024-11-27 05:45:19.517554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb0d690) 00:23:31.728 [2024-11-27 05:45:19.517563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.728 [2024-11-27 05:45:19.517572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f100, cid 0, qid 0 00:23:31.728 [2024-11-27 05:45:19.517651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.728 [2024-11-27 05:45:19.517657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:23:31.728 [2024-11-27 05:45:19.517660] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517663] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb0d690): datao=0, datal=4096, cccid=0 00:23:31.728 [2024-11-27 05:45:19.517667] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb6f100) on tqpair(0xb0d690): expected_datao=0, payload_size=4096 00:23:31.728 [2024-11-27 05:45:19.517678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517689] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517693] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517706] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.728 [2024-11-27 05:45:19.517712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.728 [2024-11-27 05:45:19.517715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.728 [2024-11-27 05:45:19.517718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f100) on tqpair=0xb0d690 00:23:31.728 [2024-11-27 05:45:19.517725] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:31.728 [2024-11-27 05:45:19.517729] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:31.728 [2024-11-27 05:45:19.517733] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:31.728 [2024-11-27 05:45:19.517737] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:31.728 [2024-11-27 05:45:19.517741] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:23:31.729 [2024-11-27 05:45:19.517745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:31.729 [2024-11-27 05:45:19.517752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:31.729 [2024-11-27 05:45:19.517759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb0d690) 00:23:31.729 [2024-11-27 05:45:19.517771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:31.729 [2024-11-27 05:45:19.517781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f100, cid 0, qid 0 00:23:31.729 [2024-11-27 05:45:19.517839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.729 [2024-11-27 05:45:19.517847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.729 [2024-11-27 05:45:19.517850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f100) on tqpair=0xb0d690 00:23:31.729 [2024-11-27 05:45:19.517860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb0d690) 00:23:31.729 [2024-11-27 05:45:19.517872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.729 [2024-11-27 05:45:19.517877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb0d690) 00:23:31.729 [2024-11-27 05:45:19.517888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.729 [2024-11-27 05:45:19.517893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb0d690) 00:23:31.729 [2024-11-27 05:45:19.517904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.729 [2024-11-27 05:45:19.517909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.729 [2024-11-27 05:45:19.517920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.729 [2024-11-27 05:45:19.517925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:31.729 [2024-11-27 05:45:19.517934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:23:31.729 [2024-11-27 05:45:19.517940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.517943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb0d690) 00:23:31.729 [2024-11-27 05:45:19.517949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.729 [2024-11-27 05:45:19.517960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f100, cid 0, qid 0 00:23:31.729 [2024-11-27 05:45:19.517965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f280, cid 1, qid 0 00:23:31.729 [2024-11-27 05:45:19.517968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f400, cid 2, qid 0 00:23:31.729 [2024-11-27 05:45:19.517973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.729 [2024-11-27 05:45:19.517977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f700, cid 4, qid 0 00:23:31.729 [2024-11-27 05:45:19.518066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.729 [2024-11-27 05:45:19.518072] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.729 [2024-11-27 05:45:19.518075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.518078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f700) on tqpair=0xb0d690 00:23:31.729 [2024-11-27 05:45:19.518082] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:31.729 [2024-11-27 05:45:19.518088] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:31.729 [2024-11-27 05:45:19.518098] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.518101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb0d690) 00:23:31.729 [2024-11-27 05:45:19.518107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.729 [2024-11-27 05:45:19.518117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f700, cid 4, qid 0 00:23:31.729 [2024-11-27 05:45:19.518185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.729 [2024-11-27 05:45:19.518191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.729 [2024-11-27 05:45:19.518194] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.518197] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb0d690): datao=0, datal=4096, cccid=4 00:23:31.729 [2024-11-27 05:45:19.518200] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb6f700) on tqpair(0xb0d690): expected_datao=0, payload_size=4096 00:23:31.729 [2024-11-27 05:45:19.518204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.518216] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.518219] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.558717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.729 [2024-11-27 05:45:19.558729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.729 [2024-11-27 05:45:19.558732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.558736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f700) on tqpair=0xb0d690 00:23:31.729 [2024-11-27 05:45:19.558747] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:31.729 [2024-11-27 05:45:19.558770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.558775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb0d690) 00:23:31.729 [2024-11-27 05:45:19.558781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.729 [2024-11-27 05:45:19.558787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.558791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.558794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb0d690) 00:23:31.729 [2024-11-27 05:45:19.558799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.729 [2024-11-27 05:45:19.558813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f700, cid 4, qid 0 00:23:31.729 [2024-11-27 05:45:19.558818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f880, cid 5, qid 0 00:23:31.729 [2024-11-27 05:45:19.558910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.729 [2024-11-27 05:45:19.558916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.729 [2024-11-27 05:45:19.558919] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.558923] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb0d690): datao=0, datal=1024, cccid=4 00:23:31.729 [2024-11-27 05:45:19.558926] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb6f700) on tqpair(0xb0d690): expected_datao=0, 
payload_size=1024 00:23:31.729 [2024-11-27 05:45:19.558930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.558936] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.558941] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.558946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.729 [2024-11-27 05:45:19.558951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.729 [2024-11-27 05:45:19.558954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.558957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f880) on tqpair=0xb0d690 00:23:31.729 [2024-11-27 05:45:19.600756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.729 [2024-11-27 05:45:19.600768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.729 [2024-11-27 05:45:19.600771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.600775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f700) on tqpair=0xb0d690 00:23:31.729 [2024-11-27 05:45:19.600786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.600790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb0d690) 00:23:31.729 [2024-11-27 05:45:19.600796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.729 [2024-11-27 05:45:19.600813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f700, cid 4, qid 0 00:23:31.729 [2024-11-27 05:45:19.600914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.729 [2024-11-27 05:45:19.600920] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.729 [2024-11-27 05:45:19.600923] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.600926] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb0d690): datao=0, datal=3072, cccid=4 00:23:31.729 [2024-11-27 05:45:19.600930] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb6f700) on tqpair(0xb0d690): expected_datao=0, payload_size=3072 00:23:31.729 [2024-11-27 05:45:19.600934] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.600940] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.600943] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.600957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.729 [2024-11-27 05:45:19.600963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.729 [2024-11-27 05:45:19.600966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.729 [2024-11-27 05:45:19.600969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f700) on tqpair=0xb0d690 00:23:31.729 [2024-11-27 05:45:19.600977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.730 [2024-11-27 05:45:19.600980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb0d690) 00:23:31.730 [2024-11-27 05:45:19.600986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.730 [2024-11-27 05:45:19.600999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f700, cid 4, qid 0 00:23:31.730 [2024-11-27 05:45:19.601068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:31.730 [2024-11-27 
05:45:19.601075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:31.730 [2024-11-27 05:45:19.601078] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:31.730 [2024-11-27 05:45:19.601081] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb0d690): datao=0, datal=8, cccid=4 00:23:31.730 [2024-11-27 05:45:19.601084] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb6f700) on tqpair(0xb0d690): expected_datao=0, payload_size=8 00:23:31.730 [2024-11-27 05:45:19.601088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.730 [2024-11-27 05:45:19.601094] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:31.730 [2024-11-27 05:45:19.601097] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:31.730 [2024-11-27 05:45:19.645680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.730 [2024-11-27 05:45:19.645692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.730 [2024-11-27 05:45:19.645696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.730 [2024-11-27 05:45:19.645699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f700) on tqpair=0xb0d690
00:23:31.730 =====================================================
00:23:31.730 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:31.730 =====================================================
00:23:31.730 Controller Capabilities/Features
00:23:31.730 ================================
00:23:31.730 Vendor ID: 0000
00:23:31.730 Subsystem Vendor ID: 0000
00:23:31.730 Serial Number: ....................
00:23:31.730 Model Number: ........................................
00:23:31.730 Firmware Version: 25.01
00:23:31.730 Recommended Arb Burst: 0
00:23:31.730 IEEE OUI Identifier: 00 00 00
00:23:31.730 Multi-path I/O
00:23:31.730 May have multiple subsystem ports: No
00:23:31.730 May have multiple controllers: No
00:23:31.730 Associated with SR-IOV VF: No
00:23:31.730 Max Data Transfer Size: 131072
00:23:31.730 Max Number of Namespaces: 0
00:23:31.730 Max Number of I/O Queues: 1024
00:23:31.730 NVMe Specification Version (VS): 1.3
00:23:31.730 NVMe Specification Version (Identify): 1.3
00:23:31.730 Maximum Queue Entries: 128
00:23:31.730 Contiguous Queues Required: Yes
00:23:31.730 Arbitration Mechanisms Supported
00:23:31.730 Weighted Round Robin: Not Supported
00:23:31.730 Vendor Specific: Not Supported
00:23:31.730 Reset Timeout: 15000 ms
00:23:31.730 Doorbell Stride: 4 bytes
00:23:31.730 NVM Subsystem Reset: Not Supported
00:23:31.730 Command Sets Supported
00:23:31.730 NVM Command Set: Supported
00:23:31.730 Boot Partition: Not Supported
00:23:31.730 Memory Page Size Minimum: 4096 bytes
00:23:31.730 Memory Page Size Maximum: 4096 bytes
00:23:31.730 Persistent Memory Region: Not Supported
00:23:31.730 Optional Asynchronous Events Supported
00:23:31.730 Namespace Attribute Notices: Not Supported
00:23:31.730 Firmware Activation Notices: Not Supported
00:23:31.730 ANA Change Notices: Not Supported
00:23:31.730 PLE Aggregate Log Change Notices: Not Supported
00:23:31.730 LBA Status Info Alert Notices: Not Supported
00:23:31.730 EGE Aggregate Log Change Notices: Not Supported
00:23:31.730 Normal NVM Subsystem Shutdown event: Not Supported
00:23:31.730 Zone Descriptor Change Notices: Not Supported
00:23:31.730 Discovery Log Change Notices: Supported
00:23:31.730 Controller Attributes
00:23:31.730 128-bit Host Identifier: Not Supported
00:23:31.730 Non-Operational Permissive Mode: Not Supported
00:23:31.730 NVM Sets: Not Supported
00:23:31.730 Read Recovery Levels: Not Supported
00:23:31.730 Endurance Groups: Not Supported
00:23:31.730 Predictable Latency Mode: Not Supported
00:23:31.730 Traffic Based Keep ALive: Not Supported
00:23:31.730 Namespace Granularity: Not Supported
00:23:31.730 SQ Associations: Not Supported
00:23:31.730 UUID List: Not Supported
00:23:31.730 Multi-Domain Subsystem: Not Supported
00:23:31.730 Fixed Capacity Management: Not Supported
00:23:31.730 Variable Capacity Management: Not Supported
00:23:31.730 Delete Endurance Group: Not Supported
00:23:31.730 Delete NVM Set: Not Supported
00:23:31.730 Extended LBA Formats Supported: Not Supported
00:23:31.730 Flexible Data Placement Supported: Not Supported
00:23:31.730
00:23:31.730 Controller Memory Buffer Support
00:23:31.730 ================================
00:23:31.730 Supported: No
00:23:31.730
00:23:31.730 Persistent Memory Region Support
00:23:31.730 ================================
00:23:31.730 Supported: No
00:23:31.730
00:23:31.730 Admin Command Set Attributes
00:23:31.730 ============================
00:23:31.730 Security Send/Receive: Not Supported
00:23:31.730 Format NVM: Not Supported
00:23:31.730 Firmware Activate/Download: Not Supported
00:23:31.730 Namespace Management: Not Supported
00:23:31.730 Device Self-Test: Not Supported
00:23:31.730 Directives: Not Supported
00:23:31.730 NVMe-MI: Not Supported
00:23:31.730 Virtualization Management: Not Supported
00:23:31.730 Doorbell Buffer Config: Not Supported
00:23:31.730 Get LBA Status Capability: Not Supported
00:23:31.730 Command & Feature Lockdown Capability: Not Supported
00:23:31.730 Abort Command Limit: 1
00:23:31.730 Async Event Request Limit: 4
00:23:31.730 Number of Firmware Slots: N/A
00:23:31.730 Firmware Slot 1 Read-Only: N/A
00:23:31.730 Firmware Activation Without Reset: N/A
00:23:31.730 Multiple Update Detection Support: N/A
00:23:31.730 Firmware Update Granularity: No Information Provided
00:23:31.730 Per-Namespace SMART Log: No
00:23:31.730 Asymmetric Namespace Access Log Page: Not Supported
00:23:31.730 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:31.730 Command Effects Log Page: Not Supported
00:23:31.730 Get Log Page Extended Data: Supported
00:23:31.730 Telemetry Log Pages: Not Supported
00:23:31.730 Persistent Event Log Pages: Not Supported
00:23:31.730 Supported Log Pages Log Page: May Support
00:23:31.730 Commands Supported & Effects Log Page: Not Supported
00:23:31.730 Feature Identifiers & Effects Log Page: May Support
00:23:31.730 NVMe-MI Commands & Effects Log Page: May Support
00:23:31.730 Data Area 4 for Telemetry Log: Not Supported
00:23:31.730 Error Log Page Entries Supported: 128
00:23:31.730 Keep Alive: Not Supported
00:23:31.730
00:23:31.730 NVM Command Set Attributes
00:23:31.730 ==========================
00:23:31.730 Submission Queue Entry Size
00:23:31.730 Max: 1
00:23:31.730 Min: 1
00:23:31.730 Completion Queue Entry Size
00:23:31.730 Max: 1
00:23:31.730 Min: 1
00:23:31.730 Number of Namespaces: 0
00:23:31.730 Compare Command: Not Supported
00:23:31.730 Write Uncorrectable Command: Not Supported
00:23:31.730 Dataset Management Command: Not Supported
00:23:31.730 Write Zeroes Command: Not Supported
00:23:31.730 Set Features Save Field: Not Supported
00:23:31.730 Reservations: Not Supported
00:23:31.730 Timestamp: Not Supported
00:23:31.730 Copy: Not Supported
00:23:31.730 Volatile Write Cache: Not Present
00:23:31.730 Atomic Write Unit (Normal): 1
00:23:31.730 Atomic Write Unit (PFail): 1
00:23:31.730 Atomic Compare & Write Unit: 1
00:23:31.730 Fused Compare & Write: Supported
00:23:31.730 Scatter-Gather List
00:23:31.730 SGL Command Set: Supported
00:23:31.730 SGL Keyed: Supported
00:23:31.730 SGL Bit Bucket Descriptor: Not Supported
00:23:31.730 SGL Metadata Pointer: Not Supported
00:23:31.730 Oversized SGL: Not Supported
00:23:31.730 SGL Metadata Address: Not Supported
00:23:31.730 SGL Offset: Supported
00:23:31.730 Transport SGL Data Block: Not Supported
00:23:31.730 Replay Protected Memory Block: Not Supported
00:23:31.730
00:23:31.730 Firmware Slot Information
00:23:31.730 =========================
00:23:31.730 Active slot: 0
00:23:31.730
00:23:31.730
00:23:31.730 Error Log
00:23:31.730 =========
00:23:31.730
00:23:31.730 Active Namespaces
00:23:31.730 =================
00:23:31.730 Discovery Log Page
00:23:31.730 ==================
00:23:31.730 Generation Counter: 2
00:23:31.730 Number of Records: 2
00:23:31.730 Record Format: 0
00:23:31.730
00:23:31.730 Discovery Log Entry 0
00:23:31.730 ----------------------
00:23:31.730 Transport Type: 3 (TCP)
00:23:31.730 Address Family: 1 (IPv4)
00:23:31.730 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:31.730 Entry Flags:
00:23:31.730 Duplicate Returned Information: 1
00:23:31.730 Explicit Persistent Connection Support for Discovery: 1
00:23:31.730 Transport Requirements:
00:23:31.730 Secure Channel: Not Required
00:23:31.730 Port ID: 0 (0x0000)
00:23:31.730 Controller ID: 65535 (0xffff)
00:23:31.730 Admin Max SQ Size: 128
00:23:31.730 Transport Service Identifier: 4420
00:23:31.730 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:31.731 Transport Address: 10.0.0.2
00:23:31.731 Discovery Log Entry 1
00:23:31.731 ----------------------
00:23:31.731 Transport Type: 3 (TCP)
00:23:31.731 Address Family: 1 (IPv4)
00:23:31.731 Subsystem Type: 2 (NVM Subsystem)
00:23:31.731 Entry Flags:
00:23:31.731 Duplicate Returned Information: 0
00:23:31.731 Explicit Persistent Connection Support for Discovery: 0
00:23:31.731 Transport Requirements:
00:23:31.731 Secure Channel: Not Required
00:23:31.731 Port ID: 0 (0x0000)
00:23:31.731 Controller ID: 65535 (0xffff)
00:23:31.731 Admin Max SQ Size: 128
00:23:31.731 Transport Service Identifier: 4420
00:23:31.731 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:31.731 Transport Address: 10.0.0.2
[2024-11-27 05:45:19.645779] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:31.731 [2024-11-27
05:45:19.645789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f100) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.645795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.731 [2024-11-27 05:45:19.645800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f280) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.645804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.731 [2024-11-27 05:45:19.645808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f400) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.645812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.731 [2024-11-27 05:45:19.645817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.645820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.731 [2024-11-27 05:45:19.645828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.645832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.645835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.731 [2024-11-27 05:45:19.645841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.731 [2024-11-27 05:45:19.645855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.731 [2024-11-27 05:45:19.645913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.731 [2024-11-27 
05:45:19.645919] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.731 [2024-11-27 05:45:19.645922] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.645925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.645931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.645934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.645937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.731 [2024-11-27 05:45:19.645943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.731 [2024-11-27 05:45:19.645955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.731 [2024-11-27 05:45:19.646030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.731 [2024-11-27 05:45:19.646035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.731 [2024-11-27 05:45:19.646039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646042] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.646046] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:31.731 [2024-11-27 05:45:19.646050] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:31.731 [2024-11-27 05:45:19.646058] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.731 
[2024-11-27 05:45:19.646067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.731 [2024-11-27 05:45:19.646072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.731 [2024-11-27 05:45:19.646082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.731 [2024-11-27 05:45:19.646140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.731 [2024-11-27 05:45:19.646146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.731 [2024-11-27 05:45:19.646149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.646160] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.731 [2024-11-27 05:45:19.646173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.731 [2024-11-27 05:45:19.646182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.731 [2024-11-27 05:45:19.646245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.731 [2024-11-27 05:45:19.646250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.731 [2024-11-27 05:45:19.646254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 
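The repeated FABRIC PROPERTY SET / FABRIC PROPERTY GET pairs in this section of the trace are the host driving the controller shutdown handshake noted above ("RTD3E = 0 us", "shutdown timeout = 10000 ms"): one property write to CC to request shutdown, then polling reads of CSTS until the shutdown-status field reports complete. A minimal sketch of the register bit logic, assuming the field positions from the NVMe base specification (CC.SHN at bits 15:14, CSTS.SHST at bits 3:2); this is illustrative, not SPDK code:

```python
# Field positions per the NVMe base spec (assumption stated in the lead-in).
CC_SHN_SHIFT, CC_SHN_MASK = 14, 0x3      # CC.SHN: shutdown notification
CSTS_SHST_SHIFT, CSTS_SHST_MASK = 2, 0x3 # CSTS.SHST: shutdown status
SHN_NORMAL = 1                           # request a normal shutdown
SHST_COMPLETE = 2                        # shutdown processing complete

def set_shn(cc: int, shn: int = SHN_NORMAL) -> int:
    """Return CC with the SHN field replaced (what the PROPERTY SET writes)."""
    return (cc & ~(CC_SHN_MASK << CC_SHN_SHIFT)) | (shn << CC_SHN_SHIFT)

def shutdown_done(csts: int) -> bool:
    """What each PROPERTY GET poll checks: has SHST reached 'complete'?"""
    return (csts >> CSTS_SHST_SHIFT) & CSTS_SHST_MASK == SHST_COMPLETE
```

Each FABRIC PROPERTY GET in the trace corresponds to one `shutdown_done` check; the driver keeps polling until it returns true or the 10000 ms shutdown timeout expires.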
00:23:31.731 [2024-11-27 05:45:19.646265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.731 [2024-11-27 05:45:19.646277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.731 [2024-11-27 05:45:19.646286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.731 [2024-11-27 05:45:19.646351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.731 [2024-11-27 05:45:19.646357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.731 [2024-11-27 05:45:19.646360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.646372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.731 [2024-11-27 05:45:19.646384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.731 [2024-11-27 05:45:19.646393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.731 [2024-11-27 05:45:19.646453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.731 [2024-11-27 05:45:19.646459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.731 
[2024-11-27 05:45:19.646462] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.646473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.731 [2024-11-27 05:45:19.646487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.731 [2024-11-27 05:45:19.646497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.731 [2024-11-27 05:45:19.646551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.731 [2024-11-27 05:45:19.646558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.731 [2024-11-27 05:45:19.646561] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.646572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.731 [2024-11-27 05:45:19.646584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.731 [2024-11-27 05:45:19.646593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 
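The Discovery Log Page printed earlier in this run (entries for the discovery subsystem and nqn.2016-06.io.spdk:cnode1) is returned on the wire as fixed 1024-byte records. A hedged sketch of how those printed fields map onto the record layout, assuming the NVMe-over-Fabrics discovery log page entry offsets (trtype/adrfam/subtype at bytes 0..2, portid/cntlid/asqsz as little-endian u16s at 4..9, trsvcid at 32, subnqn at 256, traddr at 512); the synthetic bytes below mirror Discovery Log Entry 1 from the dump and are not captured traffic:

```python
import struct

def parse_disc_entry(raw: bytes) -> dict:
    """Decode one 1024-byte discovery log page entry (offsets per NVMe-oF spec)."""
    portid, cntlid, asqsz = struct.unpack_from("<HHH", raw, 4)
    return {
        "trtype": raw[0], "adrfam": raw[1], "subtype": raw[2],
        "portid": portid, "cntlid": cntlid, "asqsz": asqsz,
        "trsvcid": raw[32:64].split(b"\x00")[0].decode().strip(),
        "subnqn": raw[256:512].split(b"\x00")[0].decode().strip(),
        "traddr": raw[512:768].split(b"\x00")[0].decode().strip(),
    }

# Synthetic entry mirroring "Discovery Log Entry 1" from the dump above:
# TCP (3), IPv4 (1), NVM Subsystem (2), port 0, cntlid 0xffff, asqsz 128.
entry = bytearray(1024)
entry[0:3] = bytes([3, 1, 2])
struct.pack_into("<HHH", entry, 4, 0, 0xFFFF, 128)
entry[32:36] = b"4420"
entry[256:282] = b"nqn.2016-06.io.spdk:cnode1"
entry[512:520] = b"10.0.0.2"
decoded = parse_disc_entry(bytes(entry))
```

The decoded dict reproduces the fields the identify output printed: transport type/address family/subsystem type, controller ID 65535, admin max SQ size 128, service identifier 4420, and the subsystem NQN and address.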
00:23:31.731 [2024-11-27 05:45:19.646667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.731 [2024-11-27 05:45:19.646677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.731 [2024-11-27 05:45:19.646680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.646692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.731 [2024-11-27 05:45:19.646704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.731 [2024-11-27 05:45:19.646714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.731 [2024-11-27 05:45:19.646775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.731 [2024-11-27 05:45:19.646781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.731 [2024-11-27 05:45:19.646784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.731 [2024-11-27 05:45:19.646795] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.731 [2024-11-27 05:45:19.646799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.646802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.732 [2024-11-27 05:45:19.646807] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.732 [2024-11-27 05:45:19.646816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.732 [2024-11-27 05:45:19.646882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.732 [2024-11-27 05:45:19.646887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.732 [2024-11-27 05:45:19.646890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.646893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.732 [2024-11-27 05:45:19.646902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.646905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.646908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.732 [2024-11-27 05:45:19.646914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.732 [2024-11-27 05:45:19.646925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.732 [2024-11-27 05:45:19.646985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.732 [2024-11-27 05:45:19.646990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.732 [2024-11-27 05:45:19.646993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.646996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.732 [2024-11-27 05:45:19.647005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.647009] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.647011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.732 [2024-11-27 05:45:19.647017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.732 [2024-11-27 05:45:19.647027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.732 [2024-11-27 05:45:19.647092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.732 [2024-11-27 05:45:19.647098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.732 [2024-11-27 05:45:19.647101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.647105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.732 [2024-11-27 05:45:19.647112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.647116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.647119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.732 [2024-11-27 05:45:19.647125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.732 [2024-11-27 05:45:19.647134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.732 [2024-11-27 05:45:19.647192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.732 [2024-11-27 05:45:19.647198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.732 [2024-11-27 05:45:19.647201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.647204] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.732 [2024-11-27 05:45:19.647213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.647216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.647219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb0d690) 00:23:31.732 [2024-11-27 05:45:19.647225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.732 [2024-11-27 05:45:19.647234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb6f580, cid 3, qid 0 00:23:31.732 [2024-11-27 05:45:19.647294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.732 [2024-11-27 05:45:19.647300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.732 [2024-11-27 05:45:19.647303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.732 [2024-11-27 05:45:19.647306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690 00:23:31.734 [2024-11-27 05:45:19.653796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.734 [2024-11-27 05:45:19.653801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.734 [2024-11-27 05:45:19.653805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.734 [2024-11-27 05:45:19.653808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb6f580) on tqpair=0xb0d690
00:23:31.734 [2024-11-27 05:45:19.653815] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:23:31.734 00:23:31.734 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:31.734 [2024-11-27 05:45:19.690468] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:23:31.734 [2024-11-27 05:45:19.690508] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1844975 ] 00:23:31.997 [2024-11-27 05:45:19.731857] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:31.997 [2024-11-27 05:45:19.731903] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:31.997 [2024-11-27 05:45:19.731907] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:31.997 [2024-11-27 05:45:19.731919] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:31.997 [2024-11-27 05:45:19.731927] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:31.997 [2024-11-27 05:45:19.732273] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:31.997 [2024-11-27 05:45:19.732302] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x915690 0 00:23:31.997 [2024-11-27 05:45:19.746684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:31.997 [2024-11-27 
05:45:19.746698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:31.997 [2024-11-27 05:45:19.746702] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:31.997 [2024-11-27 05:45:19.746705] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:31.997 [2024-11-27 05:45:19.746731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.997 [2024-11-27 05:45:19.746736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.997 [2024-11-27 05:45:19.746740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x915690) 00:23:31.997 [2024-11-27 05:45:19.746749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:31.997 [2024-11-27 05:45:19.746765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977100, cid 0, qid 0 00:23:31.997 [2024-11-27 05:45:19.754682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.997 [2024-11-27 05:45:19.754690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.997 [2024-11-27 05:45:19.754693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.997 [2024-11-27 05:45:19.754696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977100) on tqpair=0x915690 00:23:31.997 [2024-11-27 05:45:19.754707] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:31.997 [2024-11-27 05:45:19.754712] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:31.997 [2024-11-27 05:45:19.754717] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:31.997 [2024-11-27 05:45:19.754728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.997 [2024-11-27 
05:45:19.754732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.997 [2024-11-27 05:45:19.754735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x915690) 00:23:31.997 [2024-11-27 05:45:19.754742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.997 [2024-11-27 05:45:19.754755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977100, cid 0, qid 0 00:23:31.997 [2024-11-27 05:45:19.754882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.997 [2024-11-27 05:45:19.754888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.997 [2024-11-27 05:45:19.754891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.997 [2024-11-27 05:45:19.754895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977100) on tqpair=0x915690 00:23:31.997 [2024-11-27 05:45:19.754901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:31.997 [2024-11-27 05:45:19.754907] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:31.997 [2024-11-27 05:45:19.754913] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.997 [2024-11-27 05:45:19.754917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.997 [2024-11-27 05:45:19.754920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x915690) 00:23:31.997 [2024-11-27 05:45:19.754928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.997 [2024-11-27 05:45:19.754938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977100, cid 0, qid 0 00:23:31.997 [2024-11-27 
05:45:19.754998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.997 [2024-11-27 05:45:19.755004] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.997 [2024-11-27 05:45:19.755007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.997 [2024-11-27 05:45:19.755010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977100) on tqpair=0x915690 00:23:31.997 [2024-11-27 05:45:19.755014] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:31.997 [2024-11-27 05:45:19.755021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:31.997 [2024-11-27 05:45:19.755027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:31.997 [2024-11-27 05:45:19.755030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:31.998 [2024-11-27 05:45:19.755033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x915690) 00:23:31.998 [2024-11-27 05:45:19.755039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.998 [2024-11-27 05:45:19.755049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977100, cid 0, qid 0 00:23:31.998 [2024-11-27 05:45:19.755113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:31.998 [2024-11-27 05:45:19.755119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:31.998 [2024-11-27 05:45:19.755122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:31.998 [2024-11-27 05:45:19.755125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977100) on tqpair=0x915690 00:23:31.998 [2024-11-27 05:45:19.755129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:23:31.998 [2024-11-27 05:45:19.755137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x915690)
00:23:31.998 [2024-11-27 05:45:19.755150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.998 [2024-11-27 05:45:19.755159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977100, cid 0, qid 0
00:23:31.998 [2024-11-27 05:45:19.755220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.998 [2024-11-27 05:45:19.755226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.998 [2024-11-27 05:45:19.755229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977100) on tqpair=0x915690
00:23:31.998 [2024-11-27 05:45:19.755236] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:23:31.998 [2024-11-27 05:45:19.755240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:23:31.998 [2024-11-27 05:45:19.755247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:23:31.998 [2024-11-27 05:45:19.755354] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:23:31.998 [2024-11-27 05:45:19.755358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:23:31.998 [2024-11-27 05:45:19.755364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x915690)
00:23:31.998 [2024-11-27 05:45:19.755379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.998 [2024-11-27 05:45:19.755389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977100, cid 0, qid 0
00:23:31.998 [2024-11-27 05:45:19.755451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.998 [2024-11-27 05:45:19.755457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.998 [2024-11-27 05:45:19.755460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977100) on tqpair=0x915690
00:23:31.998 [2024-11-27 05:45:19.755467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:23:31.998 [2024-11-27 05:45:19.755475] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x915690)
00:23:31.998 [2024-11-27 05:45:19.755488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.998 [2024-11-27 05:45:19.755497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977100, cid 0, qid 0
00:23:31.998 [2024-11-27 05:45:19.755563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.998 [2024-11-27 05:45:19.755568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.998 [2024-11-27 05:45:19.755571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977100) on tqpair=0x915690
00:23:31.998 [2024-11-27 05:45:19.755578] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:23:31.998 [2024-11-27 05:45:19.755582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:23:31.998 [2024-11-27 05:45:19.755590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:23:31.998 [2024-11-27 05:45:19.755599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:23:31.998 [2024-11-27 05:45:19.755607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x915690)
00:23:31.998 [2024-11-27 05:45:19.755616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.998 [2024-11-27 05:45:19.755626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977100, cid 0, qid 0
00:23:31.998 [2024-11-27 05:45:19.755730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:31.998 [2024-11-27 05:45:19.755737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:31.998 [2024-11-27 05:45:19.755740] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755743] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x915690): datao=0, datal=4096, cccid=0
00:23:31.998 [2024-11-27 05:45:19.755746] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x977100) on tqpair(0x915690): expected_datao=0, payload_size=4096
00:23:31.998 [2024-11-27 05:45:19.755750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755756] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755761] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.998 [2024-11-27 05:45:19.755776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.998 [2024-11-27 05:45:19.755779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977100) on tqpair=0x915690
00:23:31.998 [2024-11-27 05:45:19.755789] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:23:31.998 [2024-11-27 05:45:19.755793] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:23:31.998 [2024-11-27 05:45:19.755797] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:23:31.998 [2024-11-27 05:45:19.755800] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:23:31.998 [2024-11-27 05:45:19.755804] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:23:31.998 [2024-11-27 05:45:19.755808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:23:31.998 [2024-11-27 05:45:19.755816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:23:31.998 [2024-11-27 05:45:19.755821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x915690)
00:23:31.998 [2024-11-27 05:45:19.755833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:23:31.998 [2024-11-27 05:45:19.755843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977100, cid 0, qid 0
00:23:31.998 [2024-11-27 05:45:19.755903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.998 [2024-11-27 05:45:19.755909] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.998 [2024-11-27 05:45:19.755912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977100) on tqpair=0x915690
00:23:31.998 [2024-11-27 05:45:19.755920] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x915690)
00:23:31.998 [2024-11-27 05:45:19.755932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:31.998 [2024-11-27 05:45:19.755937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x915690)
00:23:31.998 [2024-11-27 05:45:19.755948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:31.998 [2024-11-27 05:45:19.755953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x915690)
00:23:31.998 [2024-11-27 05:45:19.755964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:31.998 [2024-11-27 05:45:19.755969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.755977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x915690)
00:23:31.998 [2024-11-27 05:45:19.755981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:31.998 [2024-11-27 05:45:19.755986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:23:31.998 [2024-11-27 05:45:19.755996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:23:31.998 [2024-11-27 05:45:19.756001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.998 [2024-11-27 05:45:19.756004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x915690)
00:23:31.998 [2024-11-27 05:45:19.756009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.999 [2024-11-27 05:45:19.756020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977100, cid 0, qid 0
00:23:31.999 [2024-11-27 05:45:19.756025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977280, cid 1, qid 0
00:23:31.999 [2024-11-27 05:45:19.756029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977400, cid 2, qid 0
00:23:31.999 [2024-11-27 05:45:19.756033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977580, cid 3, qid 0
00:23:31.999 [2024-11-27 05:45:19.756037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977700, cid 4, qid 0
00:23:31.999 [2024-11-27 05:45:19.756131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.999 [2024-11-27 05:45:19.756136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.999 [2024-11-27 05:45:19.756139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.756143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977700) on tqpair=0x915690
00:23:31.999 [2024-11-27 05:45:19.756146] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:23:31.999 [2024-11-27 05:45:19.756150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.756159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.756165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.756170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.756173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.756176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x915690)
00:23:31.999 [2024-11-27 05:45:19.756181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:23:31.999 [2024-11-27 05:45:19.756191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977700, cid 4, qid 0
00:23:31.999 [2024-11-27 05:45:19.756258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.999 [2024-11-27 05:45:19.756263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.999 [2024-11-27 05:45:19.756266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.756270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977700) on tqpair=0x915690
00:23:31.999 [2024-11-27 05:45:19.756320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.756329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.756337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.756340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x915690)
00:23:31.999 [2024-11-27 05:45:19.756346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.999 [2024-11-27 05:45:19.756355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977700, cid 4, qid 0
00:23:31.999 [2024-11-27 05:45:19.756428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:31.999 [2024-11-27 05:45:19.756433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:31.999 [2024-11-27 05:45:19.756436] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.756440] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x915690): datao=0, datal=4096, cccid=4
00:23:31.999 [2024-11-27 05:45:19.756443] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x977700) on tqpair(0x915690): expected_datao=0, payload_size=4096
00:23:31.999 [2024-11-27 05:45:19.756447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.756453] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.756456] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.797734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.999 [2024-11-27 05:45:19.797744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.999 [2024-11-27 05:45:19.797747] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.797750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977700) on tqpair=0x915690
00:23:31.999 [2024-11-27 05:45:19.797762] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:23:31.999 [2024-11-27 05:45:19.797769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.797777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.797784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.797787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x915690)
00:23:31.999 [2024-11-27 05:45:19.797794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.999 [2024-11-27 05:45:19.797806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977700, cid 4, qid 0
00:23:31.999 [2024-11-27 05:45:19.797888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:31.999 [2024-11-27 05:45:19.797894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:31.999 [2024-11-27 05:45:19.797897] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.797900] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x915690): datao=0, datal=4096, cccid=4
00:23:31.999 [2024-11-27 05:45:19.797904] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x977700) on tqpair(0x915690): expected_datao=0, payload_size=4096
00:23:31.999 [2024-11-27 05:45:19.797908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.797913] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.797917] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.797930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.999 [2024-11-27 05:45:19.797935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.999 [2024-11-27 05:45:19.797938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.797941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977700) on tqpair=0x915690
00:23:31.999 [2024-11-27 05:45:19.797952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.797960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.797967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.797970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x915690)
00:23:31.999 [2024-11-27 05:45:19.797976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.999 [2024-11-27 05:45:19.797986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977700, cid 4, qid 0
00:23:31.999 [2024-11-27 05:45:19.798059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:31.999 [2024-11-27 05:45:19.798065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:31.999 [2024-11-27 05:45:19.798068] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.798071] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x915690): datao=0, datal=4096, cccid=4
00:23:31.999 [2024-11-27 05:45:19.798074] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x977700) on tqpair(0x915690): expected_datao=0, payload_size=4096
00:23:31.999 [2024-11-27 05:45:19.798078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.798084] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.798087] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.839811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.999 [2024-11-27 05:45:19.839822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.999 [2024-11-27 05:45:19.839826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.839829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977700) on tqpair=0x915690
00:23:31.999 [2024-11-27 05:45:19.839840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.839848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.839855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.839861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.839866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.839870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.839874] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:23:31.999 [2024-11-27 05:45:19.839879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:23:31.999 [2024-11-27 05:45:19.839883] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:23:31.999 [2024-11-27 05:45:19.839896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.839899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x915690)
00:23:31.999 [2024-11-27 05:45:19.839906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.999 [2024-11-27 05:45:19.839912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.839917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:31.999 [2024-11-27 05:45:19.839920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x915690)
00:23:31.999 [2024-11-27 05:45:19.839925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:23:31.999 [2024-11-27 05:45:19.839938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977700, cid 4, qid 0
00:23:31.999 [2024-11-27 05:45:19.839943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977880, cid 5, qid 0
00:23:31.999 [2024-11-27 05:45:19.840019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:31.999 [2024-11-27 05:45:19.840024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:31.999 [2024-11-27 05:45:19.840027] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977700) on tqpair=0x915690
00:23:32.000 [2024-11-27 05:45:19.840036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:32.000 [2024-11-27 05:45:19.840040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:32.000 [2024-11-27 05:45:19.840043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977880) on tqpair=0x915690
00:23:32.000 [2024-11-27 05:45:19.840054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x915690)
00:23:32.000 [2024-11-27 05:45:19.840063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.000 [2024-11-27 05:45:19.840073] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977880, cid 5, qid 0
00:23:32.000 [2024-11-27 05:45:19.840142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:32.000 [2024-11-27 05:45:19.840148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:32.000 [2024-11-27 05:45:19.840151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977880) on tqpair=0x915690
00:23:32.000 [2024-11-27 05:45:19.840162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x915690)
00:23:32.000 [2024-11-27 05:45:19.840171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.000 [2024-11-27 05:45:19.840180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977880, cid 5, qid 0
00:23:32.000 [2024-11-27 05:45:19.840239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:32.000 [2024-11-27 05:45:19.840245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:32.000 [2024-11-27 05:45:19.840248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977880) on tqpair=0x915690
00:23:32.000 [2024-11-27 05:45:19.840260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x915690)
00:23:32.000 [2024-11-27 05:45:19.840268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.000 [2024-11-27 05:45:19.840277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977880, cid 5, qid 0
00:23:32.000 [2024-11-27 05:45:19.840336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:32.000 [2024-11-27 05:45:19.840342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:32.000 [2024-11-27 05:45:19.840345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977880) on tqpair=0x915690
00:23:32.000 [2024-11-27 05:45:19.840365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x915690)
00:23:32.000 [2024-11-27 05:45:19.840374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.000 [2024-11-27 05:45:19.840380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x915690)
00:23:32.000 [2024-11-27 05:45:19.840389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.000 [2024-11-27 05:45:19.840395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x915690)
00:23:32.000 [2024-11-27 05:45:19.840403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.000 [2024-11-27 05:45:19.840409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x915690)
00:23:32.000 [2024-11-27 05:45:19.840417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:32.000 [2024-11-27 05:45:19.840428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977880, cid 5, qid 0
00:23:32.000 [2024-11-27 05:45:19.840432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977700, cid 4, qid 0
00:23:32.000 [2024-11-27 05:45:19.840436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977a00, cid 6, qid 0
00:23:32.000 [2024-11-27 05:45:19.840440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977b80, cid 7, qid 0
00:23:32.000 [2024-11-27 05:45:19.840579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:32.000 [2024-11-27 05:45:19.840585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:32.000 [2024-11-27 05:45:19.840588] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840591] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x915690): datao=0, datal=8192, cccid=5
00:23:32.000 [2024-11-27 05:45:19.840595] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x977880) on tqpair(0x915690): expected_datao=0, payload_size=8192
00:23:32.000 [2024-11-27 05:45:19.840598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840633] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840637] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:32.000 [2024-11-27 05:45:19.840647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:32.000 [2024-11-27 05:45:19.840649] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840653] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x915690): datao=0, datal=512, cccid=4
00:23:32.000 [2024-11-27 05:45:19.840656] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x977700) on tqpair(0x915690): expected_datao=0, payload_size=512
00:23:32.000 [2024-11-27 05:45:19.840660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840665] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840668] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:32.000 [2024-11-27 05:45:19.840684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:32.000 [2024-11-27 05:45:19.840687] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840690] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x915690): datao=0, datal=512, cccid=6
00:23:32.000 [2024-11-27 05:45:19.840694] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x977a00) on tqpair(0x915690): expected_datao=0, payload_size=512
00:23:32.000 [2024-11-27 05:45:19.840698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840703] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840706] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:32.000 [2024-11-27 05:45:19.840715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:32.000 [2024-11-27 05:45:19.840718] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840720] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x915690): datao=0, datal=4096, cccid=7
00:23:32.000 [2024-11-27 05:45:19.840724] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x977b80) on tqpair(0x915690): expected_datao=0, payload_size=4096
00:23:32.000 [2024-11-27 05:45:19.840728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840733] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840736] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:32.000 [2024-11-27 05:45:19.840748] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:32.000 [2024-11-27 05:45:19.840751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977880) on tqpair=0x915690
00:23:32.000 [2024-11-27 05:45:19.840764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:32.000 [2024-11-27 05:45:19.840769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:32.000 [2024-11-27 05:45:19.840772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977700) on tqpair=0x915690
00:23:32.000 [2024-11-27 05:45:19.840783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:32.000 [2024-11-27 05:45:19.840788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:32.000 [2024-11-27 05:45:19.840791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977a00) on tqpair=0x915690
00:23:32.000 [2024-11-27 05:45:19.840800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:32.000 [2024-11-27 05:45:19.840805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:32.000 [2024-11-27 05:45:19.840807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:32.000 [2024-11-27 05:45:19.840810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977b80) on tqpair=0x915690
00:23:32.000 =====================================================
00:23:32.000 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:32.000 =====================================================
00:23:32.000 Controller Capabilities/Features
00:23:32.000 ================================
00:23:32.000 Vendor ID: 8086 00:23:32.000 Subsystem Vendor ID: 8086 00:23:32.000 Serial Number: SPDK00000000000001 00:23:32.000 Model Number: SPDK bdev Controller 00:23:32.000 Firmware Version: 25.01 00:23:32.000 Recommended Arb Burst: 6 00:23:32.000 IEEE OUI Identifier: e4 d2 5c 00:23:32.000 Multi-path I/O 00:23:32.000 May have multiple subsystem ports: Yes 00:23:32.000 May have multiple controllers: Yes 00:23:32.000 Associated with SR-IOV VF: No 00:23:32.000 Max Data Transfer Size: 131072 00:23:32.000 Max Number of Namespaces: 32 00:23:32.000 Max Number of I/O Queues: 127 00:23:32.000 NVMe Specification Version (VS): 1.3 00:23:32.000 NVMe Specification Version (Identify): 1.3 00:23:32.000 Maximum Queue Entries: 128 00:23:32.000 Contiguous Queues Required: Yes 00:23:32.001 Arbitration Mechanisms Supported 00:23:32.001 Weighted Round Robin: Not Supported 00:23:32.001 Vendor Specific: Not Supported 00:23:32.001 Reset Timeout: 15000 ms 00:23:32.001 Doorbell Stride: 4 bytes 00:23:32.001 NVM Subsystem Reset: Not Supported 00:23:32.001 Command Sets Supported 00:23:32.001 NVM Command Set: Supported 00:23:32.001 Boot Partition: Not Supported 00:23:32.001 Memory Page Size Minimum: 4096 bytes 00:23:32.001 Memory Page Size Maximum: 4096 bytes 00:23:32.001 Persistent Memory Region: Not Supported 00:23:32.001 Optional Asynchronous Events Supported 00:23:32.001 Namespace Attribute Notices: Supported 00:23:32.001 Firmware Activation Notices: Not Supported 00:23:32.001 ANA Change Notices: Not Supported 00:23:32.001 PLE Aggregate Log Change Notices: Not Supported 00:23:32.001 LBA Status Info Alert Notices: Not Supported 00:23:32.001 EGE Aggregate Log Change Notices: Not Supported 00:23:32.001 Normal NVM Subsystem Shutdown event: Not Supported 00:23:32.001 Zone Descriptor Change Notices: Not Supported 00:23:32.001 Discovery Log Change Notices: Not Supported 00:23:32.001 Controller Attributes 00:23:32.001 128-bit Host Identifier: Supported 00:23:32.001 Non-Operational Permissive 
Mode: Not Supported 00:23:32.001 NVM Sets: Not Supported 00:23:32.001 Read Recovery Levels: Not Supported 00:23:32.001 Endurance Groups: Not Supported 00:23:32.001 Predictable Latency Mode: Not Supported 00:23:32.001 Traffic Based Keep ALive: Not Supported 00:23:32.001 Namespace Granularity: Not Supported 00:23:32.001 SQ Associations: Not Supported 00:23:32.001 UUID List: Not Supported 00:23:32.001 Multi-Domain Subsystem: Not Supported 00:23:32.001 Fixed Capacity Management: Not Supported 00:23:32.001 Variable Capacity Management: Not Supported 00:23:32.001 Delete Endurance Group: Not Supported 00:23:32.001 Delete NVM Set: Not Supported 00:23:32.001 Extended LBA Formats Supported: Not Supported 00:23:32.001 Flexible Data Placement Supported: Not Supported 00:23:32.001 00:23:32.001 Controller Memory Buffer Support 00:23:32.001 ================================ 00:23:32.001 Supported: No 00:23:32.001 00:23:32.001 Persistent Memory Region Support 00:23:32.001 ================================ 00:23:32.001 Supported: No 00:23:32.001 00:23:32.001 Admin Command Set Attributes 00:23:32.001 ============================ 00:23:32.001 Security Send/Receive: Not Supported 00:23:32.001 Format NVM: Not Supported 00:23:32.001 Firmware Activate/Download: Not Supported 00:23:32.001 Namespace Management: Not Supported 00:23:32.001 Device Self-Test: Not Supported 00:23:32.001 Directives: Not Supported 00:23:32.001 NVMe-MI: Not Supported 00:23:32.001 Virtualization Management: Not Supported 00:23:32.001 Doorbell Buffer Config: Not Supported 00:23:32.001 Get LBA Status Capability: Not Supported 00:23:32.001 Command & Feature Lockdown Capability: Not Supported 00:23:32.001 Abort Command Limit: 4 00:23:32.001 Async Event Request Limit: 4 00:23:32.001 Number of Firmware Slots: N/A 00:23:32.001 Firmware Slot 1 Read-Only: N/A 00:23:32.001 Firmware Activation Without Reset: N/A 00:23:32.001 Multiple Update Detection Support: N/A 00:23:32.001 Firmware Update Granularity: No Information Provided 
00:23:32.001 Per-Namespace SMART Log: No 00:23:32.001 Asymmetric Namespace Access Log Page: Not Supported 00:23:32.001 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:32.001 Command Effects Log Page: Supported 00:23:32.001 Get Log Page Extended Data: Supported 00:23:32.001 Telemetry Log Pages: Not Supported 00:23:32.001 Persistent Event Log Pages: Not Supported 00:23:32.001 Supported Log Pages Log Page: May Support 00:23:32.001 Commands Supported & Effects Log Page: Not Supported 00:23:32.001 Feature Identifiers & Effects Log Page:May Support 00:23:32.001 NVMe-MI Commands & Effects Log Page: May Support 00:23:32.001 Data Area 4 for Telemetry Log: Not Supported 00:23:32.001 Error Log Page Entries Supported: 128 00:23:32.001 Keep Alive: Supported 00:23:32.001 Keep Alive Granularity: 10000 ms 00:23:32.001 00:23:32.001 NVM Command Set Attributes 00:23:32.001 ========================== 00:23:32.001 Submission Queue Entry Size 00:23:32.001 Max: 64 00:23:32.001 Min: 64 00:23:32.001 Completion Queue Entry Size 00:23:32.001 Max: 16 00:23:32.001 Min: 16 00:23:32.001 Number of Namespaces: 32 00:23:32.001 Compare Command: Supported 00:23:32.001 Write Uncorrectable Command: Not Supported 00:23:32.001 Dataset Management Command: Supported 00:23:32.001 Write Zeroes Command: Supported 00:23:32.001 Set Features Save Field: Not Supported 00:23:32.001 Reservations: Supported 00:23:32.001 Timestamp: Not Supported 00:23:32.001 Copy: Supported 00:23:32.001 Volatile Write Cache: Present 00:23:32.001 Atomic Write Unit (Normal): 1 00:23:32.001 Atomic Write Unit (PFail): 1 00:23:32.001 Atomic Compare & Write Unit: 1 00:23:32.001 Fused Compare & Write: Supported 00:23:32.001 Scatter-Gather List 00:23:32.001 SGL Command Set: Supported 00:23:32.001 SGL Keyed: Supported 00:23:32.001 SGL Bit Bucket Descriptor: Not Supported 00:23:32.001 SGL Metadata Pointer: Not Supported 00:23:32.001 Oversized SGL: Not Supported 00:23:32.001 SGL Metadata Address: Not Supported 00:23:32.001 SGL Offset: Supported 
00:23:32.001 Transport SGL Data Block: Not Supported 00:23:32.001 Replay Protected Memory Block: Not Supported 00:23:32.001 00:23:32.001 Firmware Slot Information 00:23:32.001 ========================= 00:23:32.001 Active slot: 1 00:23:32.001 Slot 1 Firmware Revision: 25.01 00:23:32.001 00:23:32.001 00:23:32.001 Commands Supported and Effects 00:23:32.001 ============================== 00:23:32.001 Admin Commands 00:23:32.001 -------------- 00:23:32.001 Get Log Page (02h): Supported 00:23:32.001 Identify (06h): Supported 00:23:32.001 Abort (08h): Supported 00:23:32.001 Set Features (09h): Supported 00:23:32.001 Get Features (0Ah): Supported 00:23:32.001 Asynchronous Event Request (0Ch): Supported 00:23:32.001 Keep Alive (18h): Supported 00:23:32.001 I/O Commands 00:23:32.001 ------------ 00:23:32.001 Flush (00h): Supported LBA-Change 00:23:32.001 Write (01h): Supported LBA-Change 00:23:32.001 Read (02h): Supported 00:23:32.001 Compare (05h): Supported 00:23:32.001 Write Zeroes (08h): Supported LBA-Change 00:23:32.001 Dataset Management (09h): Supported LBA-Change 00:23:32.001 Copy (19h): Supported LBA-Change 00:23:32.001 00:23:32.001 Error Log 00:23:32.001 ========= 00:23:32.001 00:23:32.001 Arbitration 00:23:32.001 =========== 00:23:32.001 Arbitration Burst: 1 00:23:32.001 00:23:32.001 Power Management 00:23:32.001 ================ 00:23:32.001 Number of Power States: 1 00:23:32.001 Current Power State: Power State #0 00:23:32.001 Power State #0: 00:23:32.001 Max Power: 0.00 W 00:23:32.001 Non-Operational State: Operational 00:23:32.001 Entry Latency: Not Reported 00:23:32.001 Exit Latency: Not Reported 00:23:32.001 Relative Read Throughput: 0 00:23:32.001 Relative Read Latency: 0 00:23:32.001 Relative Write Throughput: 0 00:23:32.001 Relative Write Latency: 0 00:23:32.001 Idle Power: Not Reported 00:23:32.001 Active Power: Not Reported 00:23:32.001 Non-Operational Permissive Mode: Not Supported 00:23:32.001 00:23:32.001 Health Information 00:23:32.001 
================== 00:23:32.001 Critical Warnings: 00:23:32.001 Available Spare Space: OK 00:23:32.001 Temperature: OK 00:23:32.001 Device Reliability: OK 00:23:32.001 Read Only: No 00:23:32.001 Volatile Memory Backup: OK 00:23:32.001 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:32.001 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:32.001 Available Spare: 0% 00:23:32.001 Available Spare Threshold: 0% 00:23:32.001 Life Percentage Used:[2024-11-27 05:45:19.840888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:32.001 [2024-11-27 05:45:19.840892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x915690) 00:23:32.001 [2024-11-27 05:45:19.840898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.001 [2024-11-27 05:45:19.840909] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977b80, cid 7, qid 0 00:23:32.001 [2024-11-27 05:45:19.840985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:32.001 [2024-11-27 05:45:19.840991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:32.001 [2024-11-27 05:45:19.840994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:32.001 [2024-11-27 05:45:19.840997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977b80) on tqpair=0x915690 00:23:32.001 [2024-11-27 05:45:19.841030] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:32.001 [2024-11-27 05:45:19.841040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977100) on tqpair=0x915690 00:23:32.001 [2024-11-27 05:45:19.841045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.001 [2024-11-27 05:45:19.841049] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977280) on tqpair=0x915690 00:23:32.002 [2024-11-27 05:45:19.841053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.002 [2024-11-27 05:45:19.841057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977400) on tqpair=0x915690 00:23:32.002 [2024-11-27 05:45:19.841061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.002 [2024-11-27 05:45:19.841065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977580) on tqpair=0x915690 00:23:32.002 [2024-11-27 05:45:19.841069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.002 [2024-11-27 05:45:19.841075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x915690) 00:23:32.002 [2024-11-27 05:45:19.841088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.002 [2024-11-27 05:45:19.841099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977580, cid 3, qid 0 00:23:32.002 [2024-11-27 05:45:19.841158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:32.002 [2024-11-27 05:45:19.841164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:32.002 [2024-11-27 05:45:19.841167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977580) on tqpair=0x915690 
00:23:32.002 [2024-11-27 05:45:19.841175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x915690) 00:23:32.002 [2024-11-27 05:45:19.841187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.002 [2024-11-27 05:45:19.841199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977580, cid 3, qid 0 00:23:32.002 [2024-11-27 05:45:19.841269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:32.002 [2024-11-27 05:45:19.841274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:32.002 [2024-11-27 05:45:19.841277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977580) on tqpair=0x915690 00:23:32.002 [2024-11-27 05:45:19.841284] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:32.002 [2024-11-27 05:45:19.841288] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:32.002 [2024-11-27 05:45:19.841296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x915690) 00:23:32.002 [2024-11-27 05:45:19.841308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.002 [2024-11-27 05:45:19.841319] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977580, cid 3, qid 0 00:23:32.002 [2024-11-27 05:45:19.841380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:32.002 [2024-11-27 05:45:19.841386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:32.002 [2024-11-27 05:45:19.841389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977580) on tqpair=0x915690 00:23:32.002 [2024-11-27 05:45:19.841400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x915690) 00:23:32.002 [2024-11-27 05:45:19.841412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.002 [2024-11-27 05:45:19.841421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977580, cid 3, qid 0 00:23:32.002 [2024-11-27 05:45:19.841497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:32.002 [2024-11-27 05:45:19.841502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:32.002 [2024-11-27 05:45:19.841505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977580) on tqpair=0x915690 00:23:32.002 [2024-11-27 05:45:19.841517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x915690) 
00:23:32.002 [2024-11-27 05:45:19.841529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.002 [2024-11-27 05:45:19.841538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977580, cid 3, qid 0 00:23:32.002 [2024-11-27 05:45:19.841599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:32.002 [2024-11-27 05:45:19.841605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:32.002 [2024-11-27 05:45:19.841608] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977580) on tqpair=0x915690 00:23:32.002 [2024-11-27 05:45:19.841619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.841625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x915690) 00:23:32.002 [2024-11-27 05:45:19.841631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.002 [2024-11-27 05:45:19.841640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977580, cid 3, qid 0 00:23:32.002 [2024-11-27 05:45:19.845677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:32.002 [2024-11-27 05:45:19.845685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:32.002 [2024-11-27 05:45:19.845688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.845691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977580) on tqpair=0x915690 00:23:32.002 [2024-11-27 05:45:19.845700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:32.002 
[2024-11-27 05:45:19.845704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.845707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x915690) 00:23:32.002 [2024-11-27 05:45:19.845712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.002 [2024-11-27 05:45:19.845723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x977580, cid 3, qid 0 00:23:32.002 [2024-11-27 05:45:19.845883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:32.002 [2024-11-27 05:45:19.845889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:32.002 [2024-11-27 05:45:19.845892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:32.002 [2024-11-27 05:45:19.845896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x977580) on tqpair=0x915690 00:23:32.002 [2024-11-27 05:45:19.845902] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:23:32.002 0% 00:23:32.002 Data Units Read: 0 00:23:32.002 Data Units Written: 0 00:23:32.002 Host Read Commands: 0 00:23:32.002 Host Write Commands: 0 00:23:32.002 Controller Busy Time: 0 minutes 00:23:32.002 Power Cycles: 0 00:23:32.002 Power On Hours: 0 hours 00:23:32.002 Unsafe Shutdowns: 0 00:23:32.002 Unrecoverable Media Errors: 0 00:23:32.002 Lifetime Error Log Entries: 0 00:23:32.002 Warning Temperature Time: 0 minutes 00:23:32.002 Critical Temperature Time: 0 minutes 00:23:32.002 00:23:32.002 Number of Queues 00:23:32.002 ================ 00:23:32.002 Number of I/O Submission Queues: 127 00:23:32.002 Number of I/O Completion Queues: 127 00:23:32.002 00:23:32.002 Active Namespaces 00:23:32.002 ================= 00:23:32.002 Namespace ID:1 00:23:32.002 Error Recovery Timeout: Unlimited 00:23:32.002 Command Set 
Identifier: NVM (00h) 00:23:32.002 Deallocate: Supported 00:23:32.002 Deallocated/Unwritten Error: Not Supported 00:23:32.002 Deallocated Read Value: Unknown 00:23:32.002 Deallocate in Write Zeroes: Not Supported 00:23:32.002 Deallocated Guard Field: 0xFFFF 00:23:32.002 Flush: Supported 00:23:32.002 Reservation: Supported 00:23:32.002 Namespace Sharing Capabilities: Multiple Controllers 00:23:32.002 Size (in LBAs): 131072 (0GiB) 00:23:32.003 Capacity (in LBAs): 131072 (0GiB) 00:23:32.003 Utilization (in LBAs): 131072 (0GiB) 00:23:32.003 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:32.003 EUI64: ABCDEF0123456789 00:23:32.003 UUID: 7c7df39a-de95-4cfd-a29e-e4a9a8b083cf 00:23:32.003 Thin Provisioning: Not Supported 00:23:32.003 Per-NS Atomic Units: Yes 00:23:32.003 Atomic Boundary Size (Normal): 0 00:23:32.003 Atomic Boundary Size (PFail): 0 00:23:32.003 Atomic Boundary Offset: 0 00:23:32.003 Maximum Single Source Range Length: 65535 00:23:32.003 Maximum Copy Length: 65535 00:23:32.003 Maximum Source Range Count: 1 00:23:32.003 NGUID/EUI64 Never Reused: No 00:23:32.003 Namespace Write Protected: No 00:23:32.003 Number of LBA Formats: 1 00:23:32.003 Current LBA Format: LBA Format #00 00:23:32.003 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:32.003 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@56 -- # nvmftestfini 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:32.003 rmmod nvme_tcp 00:23:32.003 rmmod nvme_fabrics 00:23:32.003 rmmod nvme_keyring 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1844810 ']' 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1844810 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1844810 ']' 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1844810 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.003 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1844810 00:23:32.263 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.263 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:23:32.263 05:45:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1844810' 00:23:32.263 killing process with pid 1844810 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1844810 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1844810 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.263 05:45:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:34.800 00:23:34.800 real 0m10.038s 00:23:34.800 user 0m8.279s 00:23:34.800 sys 0m4.890s 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.800 05:45:22 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:34.800 ************************************ 00:23:34.800 END TEST nvmf_identify 00:23:34.800 ************************************ 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.800 ************************************ 00:23:34.800 START TEST nvmf_perf 00:23:34.800 ************************************ 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:34.800 * Looking for test storage... 
00:23:34.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:34.800 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:34.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.801 --rc genhtml_branch_coverage=1 00:23:34.801 --rc genhtml_function_coverage=1 00:23:34.801 --rc genhtml_legend=1 00:23:34.801 --rc geninfo_all_blocks=1 00:23:34.801 --rc geninfo_unexecuted_blocks=1 00:23:34.801 00:23:34.801 ' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:34.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:34.801 --rc genhtml_branch_coverage=1 00:23:34.801 --rc genhtml_function_coverage=1 00:23:34.801 --rc genhtml_legend=1 00:23:34.801 --rc geninfo_all_blocks=1 00:23:34.801 --rc geninfo_unexecuted_blocks=1 00:23:34.801 00:23:34.801 ' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:34.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.801 --rc genhtml_branch_coverage=1 00:23:34.801 --rc genhtml_function_coverage=1 00:23:34.801 --rc genhtml_legend=1 00:23:34.801 --rc geninfo_all_blocks=1 00:23:34.801 --rc geninfo_unexecuted_blocks=1 00:23:34.801 00:23:34.801 ' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:34.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.801 --rc genhtml_branch_coverage=1 00:23:34.801 --rc genhtml_function_coverage=1 00:23:34.801 --rc genhtml_legend=1 00:23:34.801 --rc geninfo_all_blocks=1 00:23:34.801 --rc geninfo_unexecuted_blocks=1 00:23:34.801 00:23:34.801 ' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:34.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:34.801 05:45:22 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:34.801 05:45:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.374 05:45:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.374 
05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:41.374 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:41.374 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:41.374 Found net devices under 0000:86:00.0: cvl_0_0 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.374 05:45:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.374 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:41.375 Found net devices under 0000:86:00.1: cvl_0_1 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:41.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:23:41.375 00:23:41.375 --- 10.0.0.2 ping statistics --- 00:23:41.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.375 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:23:41.375 00:23:41.375 --- 10.0.0.1 ping statistics --- 00:23:41.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.375 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1848593 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1848593 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1848593 ']' 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:41.375 [2024-11-27 05:45:28.516431] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:23:41.375 [2024-11-27 05:45:28.516479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.375 [2024-11-27 05:45:28.598653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.375 [2024-11-27 05:45:28.641699] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.375 [2024-11-27 05:45:28.641734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.375 [2024-11-27 05:45:28.641741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.375 [2024-11-27 05:45:28.641748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.375 [2024-11-27 05:45:28.641753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:41.375 [2024-11-27 05:45:28.643216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.375 [2024-11-27 05:45:28.643244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.375 [2024-11-27 05:45:28.643349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.375 [2024-11-27 05:45:28.643351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:41.375 05:45:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:43.910 05:45:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:43.910 05:45:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:44.169 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:44.169 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:44.428 05:45:32 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:44.428 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:44.428 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:44.428 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:44.428 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:44.428 [2024-11-27 05:45:32.398148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.428 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:44.687 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:44.687 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:44.945 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:44.945 05:45:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:45.204 05:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.463 [2024-11-27 05:45:33.210450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.464 05:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:45.464 05:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:45.464 05:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:45.464 05:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:45.464 05:45:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:46.841 Initializing NVMe Controllers 00:23:46.841 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:46.841 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:46.841 Initialization complete. Launching workers. 00:23:46.841 ======================================================== 00:23:46.841 Latency(us) 00:23:46.841 Device Information : IOPS MiB/s Average min max 00:23:46.841 PCIE (0000:5e:00.0) NSID 1 from core 0: 98149.68 383.40 325.59 14.94 5107.92 00:23:46.841 ======================================================== 00:23:46.841 Total : 98149.68 383.40 325.59 14.94 5107.92 00:23:46.841 00:23:46.841 05:45:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:48.221 Initializing NVMe Controllers 00:23:48.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:48.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:48.221 Initialization complete. Launching workers. 
00:23:48.221 ======================================================== 00:23:48.221 Latency(us) 00:23:48.221 Device Information : IOPS MiB/s Average min max 00:23:48.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.00 0.30 13258.72 105.80 45705.15 00:23:48.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.00 0.20 20713.33 7199.14 47886.40 00:23:48.221 ======================================================== 00:23:48.221 Total : 127.00 0.50 16193.61 105.80 47886.40 00:23:48.221 00:23:48.221 05:45:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:49.599 Initializing NVMe Controllers 00:23:49.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:49.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:49.599 Initialization complete. Launching workers. 
00:23:49.599 ======================================================== 00:23:49.599 Latency(us) 00:23:49.599 Device Information : IOPS MiB/s Average min max 00:23:49.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11056.70 43.19 2893.94 513.64 6268.42 00:23:49.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3908.34 15.27 8212.35 5524.54 16226.91 00:23:49.599 ======================================================== 00:23:49.599 Total : 14965.04 58.46 4282.92 513.64 16226.91 00:23:49.599 00:23:49.599 05:45:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:49.599 05:45:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:49.599 05:45:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:52.139 Initializing NVMe Controllers 00:23:52.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:52.139 Controller IO queue size 128, less than required. 00:23:52.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:52.140 Controller IO queue size 128, less than required. 00:23:52.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:52.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:52.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:52.140 Initialization complete. Launching workers. 
00:23:52.140 ========================================================
00:23:52.140 Latency(us)
00:23:52.140 Device Information : IOPS MiB/s Average min max
00:23:52.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1791.13 447.78 72778.99 45656.48 131972.91
00:23:52.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 601.38 150.34 217085.26 87942.28 319787.87
00:23:52.140 ========================================================
00:23:52.140 Total : 2392.51 598.13 109051.54 45656.48 319787.87
00:23:52.140
00:23:52.140 05:45:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:23:52.140 No valid NVMe controllers or AIO or URING devices found
00:23:52.140 Initializing NVMe Controllers
00:23:52.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:52.140 Controller IO queue size 128, less than required.
00:23:52.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:52.140 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:23:52.140 Controller IO queue size 128, less than required.
00:23:52.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:52.140 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:23:52.140 WARNING: Some requested NVMe devices were skipped
00:23:52.140 05:45:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:23:54.679 Initializing NVMe Controllers
00:23:54.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:54.679 Controller IO queue size 128, less than required.
00:23:54.679 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:54.680 Controller IO queue size 128, less than required.
00:23:54.680 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:54.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:54.680 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:54.680 Initialization complete. Launching workers.
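The two "IO size 36964 (-o) is not a multiple of ... sector size 512" warnings above come from spdk_nvme_perf dropping any namespace whose sector size does not evenly divide the requested IO size. A minimal sketch of that check (the helper name and the hard-coded 512-byte sector sizes are illustrative, taken from the warnings in this run, not from the tool's source):

```python
def io_size_aligned(io_size: int, sector_size: int) -> bool:
    # An IO size is usable against a namespace only if it is a whole
    # number of sectors. 36964 is not: 36964 = 72 * 512 + 100.
    return io_size % sector_size == 0

io_size = 36964  # the value passed via -o in the run above
for nsid, sector_size in ((1, 512), (2, 512)):
    if not io_size_aligned(io_size, sector_size):
        print(f"WARNING: IO size {io_size} (-o) is not a multiple of "
              f"nsid {nsid} sector size {sector_size}. Removing this ns from test")
```

With both namespaces rejected, no testable devices remain, which matches the "No valid NVMe controllers or AIO or URING devices found" line above.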
00:23:54.680
00:23:54.680 ====================
00:23:54.680 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:54.680 TCP transport:
00:23:54.680 polls: 11259
00:23:54.680 idle_polls: 8029
00:23:54.680 sock_completions: 3230
00:23:54.680 nvme_completions: 6127
00:23:54.680 submitted_requests: 9202
00:23:54.680 queued_requests: 1
00:23:54.680
00:23:54.680 ====================
00:23:54.680 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:54.680 TCP transport:
00:23:54.680 polls: 15436
00:23:54.680 idle_polls: 11632
00:23:54.680 sock_completions: 3804
00:23:54.680 nvme_completions: 6817
00:23:54.680 submitted_requests: 10322
00:23:54.680 queued_requests: 1
00:23:54.680 ========================================================
00:23:54.680 Latency(us)
00:23:54.680 Device Information : IOPS MiB/s Average min max
00:23:54.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1530.55 382.64 85317.38 55878.17 135869.19
00:23:54.680 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1702.94 425.74 75782.24 47119.20 133636.51
00:23:54.680 ========================================================
00:23:54.680 Total : 3233.49 808.37 80295.63 47119.20 135869.19
00:23:54.680
00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf
-- nvmf/common.sh@121 -- # sync 00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.938 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.938 rmmod nvme_tcp 00:23:55.198 rmmod nvme_fabrics 00:23:55.198 rmmod nvme_keyring 00:23:55.198 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.198 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:55.198 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:55.198 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1848593 ']' 00:23:55.198 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1848593 00:23:55.198 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1848593 ']' 00:23:55.198 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1848593 00:23:55.198 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:55.198 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.198 05:45:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1848593 00:23:55.198 05:45:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:55.198 05:45:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:55.198 05:45:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1848593' 00:23:55.198 killing process with pid 1848593 00:23:55.198 05:45:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 1848593 00:23:55.198 05:45:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1848593 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.732 05:45:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:59.638 00:23:59.638 real 0m24.859s 00:23:59.638 user 1m5.579s 00:23:59.638 sys 0m8.187s 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:59.638 ************************************ 00:23:59.638 END TEST nvmf_perf 00:23:59.638 ************************************ 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.638 ************************************ 00:23:59.638 START TEST nvmf_fio_host 00:23:59.638 ************************************ 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:59.638 * Looking for test storage... 00:23:59.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.638 05:45:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.638 05:45:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:59.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.638 --rc genhtml_branch_coverage=1 00:23:59.638 --rc genhtml_function_coverage=1 00:23:59.638 --rc genhtml_legend=1 00:23:59.638 --rc geninfo_all_blocks=1 00:23:59.638 --rc geninfo_unexecuted_blocks=1 00:23:59.638 00:23:59.638 ' 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:59.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.638 --rc genhtml_branch_coverage=1 00:23:59.638 --rc genhtml_function_coverage=1 00:23:59.638 --rc genhtml_legend=1 00:23:59.638 --rc geninfo_all_blocks=1 00:23:59.638 --rc geninfo_unexecuted_blocks=1 00:23:59.638 00:23:59.638 ' 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:59.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.638 --rc genhtml_branch_coverage=1 00:23:59.638 --rc genhtml_function_coverage=1 00:23:59.638 --rc genhtml_legend=1 00:23:59.638 --rc geninfo_all_blocks=1 00:23:59.638 --rc geninfo_unexecuted_blocks=1 00:23:59.638 00:23:59.638 ' 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:59.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.638 --rc genhtml_branch_coverage=1 00:23:59.638 --rc genhtml_function_coverage=1 00:23:59.638 --rc genhtml_legend=1 00:23:59.638 --rc geninfo_all_blocks=1 00:23:59.638 --rc geninfo_unexecuted_blocks=1 00:23:59.638 00:23:59.638 ' 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.638 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:59.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:59.639 05:45:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:59.639 05:45:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:24:06.212 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:06.212 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.212 05:45:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:06.212 Found net devices under 0000:86:00.0: cvl_0_0 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:06.212 Found net devices under 0000:86:00.1: cvl_0_1 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.212 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.213 05:45:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:06.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:24:06.213 00:24:06.213 --- 10.0.0.2 ping statistics --- 00:24:06.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.213 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:06.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:24:06.213 00:24:06.213 --- 10.0.0.1 ping statistics --- 00:24:06.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.213 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1854792 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1854792 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1854792 ']' 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.213 05:45:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.213 [2024-11-27 05:45:53.492905] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:24:06.213 [2024-11-27 05:45:53.492954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.213 [2024-11-27 05:45:53.574206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.213 [2024-11-27 05:45:53.616237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.213 [2024-11-27 05:45:53.616276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:06.213 [2024-11-27 05:45:53.616283] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.213 [2024-11-27 05:45:53.616289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.213 [2024-11-27 05:45:53.616295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.213 [2024-11-27 05:45:53.617728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.213 [2024-11-27 05:45:53.617864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.213 [2024-11-27 05:45:53.617992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.213 [2024-11-27 05:45:53.617993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.472 05:45:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.472 05:45:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:06.472 05:45:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:06.732 [2024-11-27 05:45:54.515149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.732 05:45:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:06.732 05:45:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:06.732 05:45:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.732 05:45:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:06.991 Malloc1 00:24:06.991 05:45:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:07.251 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:07.251 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:07.510 [2024-11-27 05:45:55.367905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.510 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:07.770 05:45:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:07.770 05:45:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:08.029 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:08.029 fio-3.35 00:24:08.029 Starting 1 thread 00:24:10.563 00:24:10.563 test: (groupid=0, jobs=1): err= 0: pid=1855392: Wed Nov 27 05:45:58 2024 00:24:10.563 read: IOPS=11.8k, BW=46.0MiB/s (48.3MB/s)(92.3MiB/2005msec) 00:24:10.563 slat (nsec): min=1539, max=255112, avg=1725.18, stdev=2285.97 00:24:10.563 clat (usec): min=3082, max=10226, avg=6015.15, stdev=454.70 00:24:10.563 lat (usec): min=3117, max=10228, avg=6016.87, stdev=454.64 00:24:10.563 clat percentiles (usec): 00:24:10.563 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:24:10.563 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:24:10.563 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:24:10.564 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8291], 99.95th=[ 9503], 00:24:10.564 | 99.99th=[10159] 00:24:10.564 bw ( KiB/s): min=46384, max=47704, per=99.97%, avg=47124.00, stdev=599.11, samples=4 00:24:10.564 iops : min=11596, max=11926, avg=11781.00, stdev=149.78, samples=4 00:24:10.564 write: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(91.8MiB/2005msec); 0 zone resets 00:24:10.564 slat (nsec): min=1568, max=227212, avg=1783.16, stdev=1672.48 00:24:10.564 clat (usec): min=2435, max=9462, avg=4851.00, stdev=383.78 00:24:10.564 lat (usec): min=2450, max=9464, avg=4852.78, stdev=383.84 00:24:10.564 clat percentiles (usec): 00:24:10.564 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:24:10.564 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 
00:24:10.564 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:24:10.564 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 7701], 99.95th=[ 8717], 00:24:10.564 | 99.99th=[ 9372] 00:24:10.564 bw ( KiB/s): min=46496, max=47264, per=99.99%, avg=46880.00, stdev=314.62, samples=4 00:24:10.564 iops : min=11624, max=11816, avg=11720.00, stdev=78.66, samples=4 00:24:10.564 lat (msec) : 4=0.51%, 10=99.48%, 20=0.01% 00:24:10.564 cpu : usr=73.45%, sys=25.50%, ctx=121, majf=0, minf=2 00:24:10.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:10.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:10.564 issued rwts: total=23629,23500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.564 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:10.564 00:24:10.564 Run status group 0 (all jobs): 00:24:10.564 READ: bw=46.0MiB/s (48.3MB/s), 46.0MiB/s-46.0MiB/s (48.3MB/s-48.3MB/s), io=92.3MiB (96.8MB), run=2005-2005msec 00:24:10.564 WRITE: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=91.8MiB (96.3MB), run=2005-2005msec 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:10.564 05:45:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:10.823 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:10.823 fio-3.35 00:24:10.823 Starting 1 thread 00:24:13.357 00:24:13.357 test: (groupid=0, jobs=1): err= 0: pid=1855960: Wed Nov 27 05:46:01 2024 00:24:13.357 read: IOPS=11.0k, BW=171MiB/s (180MB/s)(343MiB/2006msec) 00:24:13.357 slat (nsec): min=2479, max=82492, avg=2792.67, stdev=1305.29 00:24:13.357 clat (usec): min=1772, max=13960, avg=6757.17, stdev=1608.51 00:24:13.357 lat (usec): min=1775, max=13962, avg=6759.96, stdev=1608.64 00:24:13.357 clat percentiles (usec): 00:24:13.357 | 1.00th=[ 3589], 5.00th=[ 4293], 10.00th=[ 4817], 20.00th=[ 5407], 00:24:13.357 | 30.00th=[ 5800], 40.00th=[ 6325], 50.00th=[ 6718], 60.00th=[ 7177], 00:24:13.357 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8717], 95.00th=[ 9634], 00:24:13.357 | 99.00th=[11076], 99.50th=[11863], 99.90th=[13304], 99.95th=[13698], 00:24:13.357 | 99.99th=[13829] 00:24:13.357 bw ( KiB/s): min=81344, max=95360, per=50.05%, avg=87744.00, stdev=5911.09, samples=4 00:24:13.357 iops : min= 5084, max= 5960, avg=5484.00, stdev=369.44, samples=4 00:24:13.357 write: IOPS=6354, BW=99.3MiB/s (104MB/s)(180MiB/1808msec); 0 zone resets 00:24:13.357 slat (usec): min=28, max=386, avg=31.52, stdev= 7.78 00:24:13.357 clat (usec): min=4842, max=14992, avg=8591.79, stdev=1485.56 00:24:13.357 lat (usec): min=4872, max=15103, avg=8623.31, stdev=1487.26 00:24:13.357 clat percentiles (usec): 00:24:13.357 | 1.00th=[ 5669], 5.00th=[ 6325], 10.00th=[ 6783], 
20.00th=[ 7373], 00:24:13.357 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:24:13.357 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11207], 00:24:13.357 | 99.00th=[12387], 99.50th=[13042], 99.90th=[14615], 99.95th=[14746], 00:24:13.357 | 99.99th=[14877] 00:24:13.357 bw ( KiB/s): min=85792, max=99200, per=89.98%, avg=91480.00, stdev=5994.08, samples=4 00:24:13.357 iops : min= 5362, max= 6200, avg=5717.50, stdev=374.63, samples=4 00:24:13.357 lat (msec) : 2=0.02%, 4=1.88%, 10=89.31%, 20=8.78% 00:24:13.357 cpu : usr=85.89%, sys=13.42%, ctx=46, majf=0, minf=2 00:24:13.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:13.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:13.357 issued rwts: total=21978,11489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:13.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:13.357 00:24:13.357 Run status group 0 (all jobs): 00:24:13.357 READ: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=343MiB (360MB), run=2006-2006msec 00:24:13.357 WRITE: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=180MiB (188MB), run=1808-1808msec 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:13.357 rmmod nvme_tcp 00:24:13.357 rmmod nvme_fabrics 00:24:13.357 rmmod nvme_keyring 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:13.357 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:13.358 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:13.358 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1854792 ']' 00:24:13.358 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1854792 00:24:13.358 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1854792 ']' 00:24:13.358 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1854792 00:24:13.358 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1854792 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1854792' 
00:24:13.616 killing process with pid 1854792 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1854792 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1854792 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:13.616 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:13.617 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:13.617 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:13.617 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.617 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.617 05:46:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.153 00:24:16.153 real 0m16.406s 00:24:16.153 user 0m49.008s 00:24:16.153 sys 0m6.558s 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.153 ************************************ 
00:24:16.153 END TEST nvmf_fio_host 00:24:16.153 ************************************ 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.153 ************************************ 00:24:16.153 START TEST nvmf_failover 00:24:16.153 ************************************ 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:16.153 * Looking for test storage... 00:24:16.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.153 05:46:03 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:16.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.153 --rc genhtml_branch_coverage=1 00:24:16.153 --rc genhtml_function_coverage=1 00:24:16.153 --rc genhtml_legend=1 00:24:16.153 --rc geninfo_all_blocks=1 00:24:16.153 --rc geninfo_unexecuted_blocks=1 00:24:16.153 00:24:16.153 ' 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:16.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.153 --rc genhtml_branch_coverage=1 00:24:16.153 --rc genhtml_function_coverage=1 00:24:16.153 --rc genhtml_legend=1 00:24:16.153 --rc geninfo_all_blocks=1 00:24:16.153 --rc geninfo_unexecuted_blocks=1 00:24:16.153 00:24:16.153 ' 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:16.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.153 --rc genhtml_branch_coverage=1 00:24:16.153 --rc genhtml_function_coverage=1 00:24:16.153 --rc genhtml_legend=1 00:24:16.153 --rc geninfo_all_blocks=1 00:24:16.153 --rc geninfo_unexecuted_blocks=1 00:24:16.153 00:24:16.153 ' 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:16.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.153 --rc genhtml_branch_coverage=1 00:24:16.153 --rc genhtml_function_coverage=1 00:24:16.153 --rc genhtml_legend=1 00:24:16.153 --rc 
geninfo_all_blocks=1 00:24:16.153 --rc geninfo_unexecuted_blocks=1 00:24:16.153 00:24:16.153 ' 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.153 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.154 05:46:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.821 05:46:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:22.821 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:22.821 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:22.821 Found net devices under 0000:86:00.0: cvl_0_0 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:22.821 Found net devices under 0000:86:00.1: cvl_0_1 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.821 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:24:22.822 00:24:22.822 --- 10.0.0.2 ping statistics --- 00:24:22.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.822 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:24:22.822 00:24:22.822 --- 10.0.0.1 ping statistics --- 00:24:22.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.822 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1859843 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1859843 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1859843 ']' 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.822 05:46:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:22.822 [2024-11-27 05:46:09.914975] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:24:22.822 [2024-11-27 05:46:09.915027] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.822 [2024-11-27 05:46:09.995319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:22.822 [2024-11-27 05:46:10.045491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.822 [2024-11-27 05:46:10.045530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.822 [2024-11-27 05:46:10.045537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.822 [2024-11-27 05:46:10.045543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:22.822 [2024-11-27 05:46:10.045548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.822 [2024-11-27 05:46:10.046839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.822 [2024-11-27 05:46:10.046943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.822 [2024-11-27 05:46:10.046945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.822 05:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.822 05:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:22.822 05:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.822 05:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.822 05:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:22.822 05:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.822 05:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:23.157 [2024-11-27 05:46:10.964010] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.157 05:46:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:23.415 Malloc0 00:24:23.415 05:46:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.674 05:46:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.674 05:46:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.932 [2024-11-27 05:46:11.783729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.932 05:46:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:24.191 [2024-11-27 05:46:11.976269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:24.191 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:24.191 [2024-11-27 05:46:12.160857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:24.191 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1860208 00:24:24.191 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:24.192 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:24.192 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1860208 /var/tmp/bdevperf.sock 00:24:24.192 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1860208 ']' 00:24:24.192 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.192 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.192 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.192 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.192 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:24.451 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.451 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:24.451 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:24.710 NVMe0n1 00:24:24.710 05:46:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:25.279 00:24:25.279 05:46:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1860435 00:24:25.279 05:46:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.279 05:46:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:24:26.214 05:46:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:26.474 [2024-11-27 05:46:14.220993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19342d0 is same with the state(6) to be set 00:24:26.475 05:46:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:29.766 05:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:29.766
00:24:29.766 05:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:30.025 05:46:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:33.314 05:46:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.314 [2024-11-27 05:46:21.050726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.314 05:46:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:34.251 05:46:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:34.510 [2024-11-27 05:46:22.265408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1935ce0 is same with the state(6) to be set 00:24:34.512 05:46:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1860435 00:24:41.088 { 00:24:41.088 "results": [ 00:24:41.088 { 00:24:41.088 "job": "NVMe0n1", 00:24:41.088 "core_mask": "0x1", 00:24:41.088 "workload": "verify", 00:24:41.088 "status": "finished", 00:24:41.088 "verify_range": { 00:24:41.088 "start": 0, 00:24:41.088 "length": 16384 00:24:41.088 }, 00:24:41.088 "queue_depth": 128, 00:24:41.088 "io_size": 4096, 00:24:41.088 "runtime": 15.00484, 00:24:41.088 "iops": 11252.769106501635, 00:24:41.088 "mibps": 43.95612932227201, 
00:24:41.088 "io_failed": 6397, 00:24:41.088 "io_timeout": 0, 00:24:41.088 "avg_latency_us": 10937.365524823626, 00:24:41.088 "min_latency_us": 417.40190476190475, 00:24:41.088 "max_latency_us": 22594.31619047619 00:24:41.088 } 00:24:41.088 ], 00:24:41.088 "core_count": 1 00:24:41.088 } 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1860208 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1860208 ']' 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1860208 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1860208 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1860208' 00:24:41.088 killing process with pid 1860208 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1860208 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1860208 00:24:41.088 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:41.088 [2024-11-27 05:46:12.236297] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:24:41.088 [2024-11-27 05:46:12.236354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1860208 ] 00:24:41.088 [2024-11-27 05:46:12.310748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.088 [2024-11-27 05:46:12.352103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.088 Running I/O for 15 seconds... 00:24:41.088 11155.00 IOPS, 43.57 MiB/s [2024-11-27T04:46:29.092Z] [2024-11-27 05:46:14.221722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-11-27 05:46:14.221755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.088 [2024-11-27 05:46:14.221771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-11-27 05:46:14.221779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.088 [2024-11-27 05:46:14.221788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-11-27 05:46:14.221796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.088 [2024-11-27 05:46:14.221804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-11-27 05:46:14.221811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:41.088 [2024-11-27 05:46:14.221819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-11-27 05:46:14.221826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.088 [2024-11-27 05:46:14.221834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-11-27 05:46:14.221841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.088 [2024-11-27 05:46:14.221849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-11-27 05:46:14.221856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.088 [2024-11-27 05:46:14.221864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-11-27 05:46:14.221870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.088 [2024-11-27 05:46:14.221878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-11-27 05:46:14.221884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.088 [2024-11-27 05:46:14.221892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-11-27 05:46:14.221899] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.221907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.221914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.221927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.221934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.221942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.221948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.221957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.221963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.221971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.221977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.221985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.221991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.221999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:41.089 [2024-11-27 05:46:14.222074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 
[2024-11-27 05:46:14.222321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.089 [2024-11-27 05:46:14.222350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-11-27 05:46:14.222357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 
[2024-11-27 05:46:14.222573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-11-27 05:46:14.222804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-11-27 05:46:14.222819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 
[2024-11-27 05:46:14.222827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-11-27 05:46:14.222834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-11-27 05:46:14.222842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.222858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.222872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.222887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.222902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.222915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.222930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.222944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.222959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.222973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.222987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.222994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 
05:46:14.223073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223159] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-11-27 05:46:14.223275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.091 [2024-11-27 05:46:14.223283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223327] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 
[2024-11-27 05:46:14.223492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-11-27 05:46:14.223563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.092 [2024-11-27 05:46:14.223571] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:41.092 [2024-11-27 05:46:14.223577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.092 [2024-11-27 05:46:14.223585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:41.092 [2024-11-27 05:46:14.223593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.092 [2024-11-27 05:46:14.223602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:41.092 [2024-11-27 05:46:14.223608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.092 [2024-11-27 05:46:14.223616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:41.092 [2024-11-27 05:46:14.223622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.092 [2024-11-27 05:46:14.223657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:41.092 [2024-11-27 05:46:14.223663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:41.092 [2024-11-27 05:46:14.223673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99200 len:8 PRP1 0x0 PRP2 0x0
00:24:41.092 [2024-11-27 05:46:14.223682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.092 [2024-11-27 05:46:14.223726] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:41.092 [2024-11-27 05:46:14.223747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.092 [2024-11-27 05:46:14.223754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.092 [2024-11-27 05:46:14.223762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.092 [2024-11-27 05:46:14.223768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.092 [2024-11-27 05:46:14.223775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.092 [2024-11-27 05:46:14.223782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.092 [2024-11-27 05:46:14.223789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.092 [2024-11-27 05:46:14.223795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.092 [2024-11-27 05:46:14.223802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:41.092 [2024-11-27 05:46:14.223838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2370 (9): Bad file descriptor
00:24:41.092 [2024-11-27 05:46:14.226627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:41.092 [2024-11-27 05:46:14.256987] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:24:41.092 11092.50 IOPS, 43.33 MiB/s [2024-11-27T04:46:29.096Z] 11130.67 IOPS, 43.48 MiB/s [2024-11-27T04:46:29.096Z] 11187.00 IOPS, 43.70 MiB/s [2024-11-27T04:46:29.097Z]
[2024-11-27 05:46:17.842546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.093 [2024-11-27 05:46:17.842590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.093 [2024-11-27 05:46:17.842604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:41.093 [2024-11-27 05:46:17.842612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.093 [2024-11-27 05:46:17.842626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:41.093 [2024-11-27 05:46:17.842633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.093 [2024-11-27 05:46:17.842641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:41.093 [2024-11-27 05:46:17.842648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 
05:46:17.842762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:43776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:43792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:41.093 [2024-11-27 05:46:17.842935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:43832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.842992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.842999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.843007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.843013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.093 [2024-11-27 05:46:17.843023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-11-27 05:46:17.843031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43912 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:43920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:43944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:43952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 
05:46:17.843187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843270] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.094 [2024-11-27 05:46:17.843438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-11-27 05:46:17.843445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.095
[... analogous *NOTICE* command/completion pairs repeated, interleaving WRITE sqid:1 lba:44104-44448 (len:8, step 8) and READ sqid:1 lba:43440-43640 (len:8, step 8); every queued request completed ABORTED - SQ DELETION (00/08) ...]
00:24:41.097 [2024-11-27 05:46:17.844500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.097 [2024-11-27 05:46:17.844507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43648 len:8 PRP1 0x0 PRP2 0x0 00:24:41.097 [2024-11-27 05:46:17.844513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.097 [2024-11-27 05:46:17.844522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.097 [2024-11-27 05:46:17.844527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.097 [2024-11-27 05:46:17.844534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43656 len:8 PRP1 0x0 PRP2 0x0 00:24:41.097 [2024-11-27 05:46:17.844542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.097 [2024-11-27 
05:46:17.844585] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:41.097 [2024-11-27 05:46:17.844606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.097 [2024-11-27 05:46:17.844613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.097 [2024-11-27 05:46:17.844620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.097 [2024-11-27 05:46:17.844627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.097 [2024-11-27 05:46:17.844634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.097 [2024-11-27 05:46:17.844640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.097 [2024-11-27 05:46:17.844647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:41.097 [2024-11-27 05:46:17.844653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.097 [2024-11-27 05:46:17.844660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:24:41.097 [2024-11-27 05:46:17.844688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2370 (9): Bad file descriptor 00:24:41.097 [2024-11-27 05:46:17.847445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:41.097 [2024-11-27 05:46:17.911295] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:24:41.097 11077.60 IOPS, 43.27 MiB/s [2024-11-27T04:46:29.101Z] 11133.17 IOPS, 43.49 MiB/s [2024-11-27T04:46:29.101Z] 11174.71 IOPS, 43.65 MiB/s [2024-11-27T04:46:29.101Z] 11198.38 IOPS, 43.74 MiB/s [2024-11-27T04:46:29.101Z] 11200.33 IOPS, 43.75 MiB/s [2024-11-27T04:46:29.101Z] [2024-11-27 05:46:22.267851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-11-27 05:46:22.267885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.097
[... analogous *NOTICE* command/completion pairs repeated for READ sqid:1 lba:71360-71520 (len:8, step 8) and WRITE sqid:1 lba:71536-71600 (len:8, step 8); every queued request completed ABORTED - SQ DELETION (00/08) ...]
[2024-11-27 05:46:22.268341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:41.098 [2024-11-27 05:46:22.268354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71704 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.098 [2024-11-27 05:46:22.268554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-11-27 05:46:22.268560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 
05:46:22.268596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-11-27 05:46:22.268602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268678] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-11-27 05:46:22.268908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.099 [2024-11-27 05:46:22.268916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-11-27 05:46:22.268922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.268930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-11-27 05:46:22.268936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.268944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-11-27 05:46:22.268950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.268958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-11-27 05:46:22.268964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.268971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-11-27 05:46:22.268978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.268986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-11-27 05:46:22.268992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 
[2024-11-27 05:46:22.269006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-11-27 05:46:22.269020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-11-27 05:46:22.269034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-11-27 05:46:22.269048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-11-27 05:46:22.269065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72008 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72016 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72024 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72032 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72040 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72048 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72056 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72064 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72072 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72080 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72088 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 
[2024-11-27 05:46:22.269357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72096 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.100 [2024-11-27 05:46:22.269380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.100 [2024-11-27 05:46:22.269385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72104 len:8 PRP1 0x0 PRP2 0x0 00:24:41.100 [2024-11-27 05:46:22.269391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.100 [2024-11-27 05:46:22.269398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.101 [2024-11-27 05:46:22.269402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.101 [2024-11-27 05:46:22.269407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72112 len:8 PRP1 0x0 PRP2 0x0 00:24:41.101 [2024-11-27 05:46:22.269414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.101 [2024-11-27 05:46:22.269420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.101 [2024-11-27 05:46:22.269424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.101 [2024-11-27 05:46:22.269430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:72120 len:8 PRP1 0x0 PRP2 0x0 00:24:41.101 [2024-11-27 05:46:22.269436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.101 [2024-11-27 05:46:22.269444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.101 [2024-11-27 05:46:22.269449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.101 [2024-11-27 05:46:22.269455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72128 len:8 PRP1 0x0 PRP2 0x0 00:24:41.101 [2024-11-27 05:46:22.269461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.101 [2024-11-27 05:46:22.269467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.101 [2024-11-27 05:46:22.269473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.101 [2024-11-27 05:46:22.269478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72136 len:8 PRP1 0x0 PRP2 0x0 00:24:41.101 [2024-11-27 05:46:22.269484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.101 [2024-11-27 05:46:22.269491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:41.101 [2024-11-27 05:46:22.269495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:41.101 [2024-11-27 05:46:22.269501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72144 len:8 PRP1 0x0 PRP2 0x0 00:24:41.101 [2024-11-27 05:46:22.269507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.101 [2024-11-27 05:46:22.269513] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:41.101 [2024-11-27 05:46:22.269518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:41.101 [2024-11-27 05:46:22.269523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72152 len:8 PRP1 0x0 PRP2 0x0
00:24:41.101 [2024-11-27 05:46:22.269529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting queued i/o / Command completed manually / WRITE / ABORTED - SQ DELETION sequence repeats for lba 72160 through 72360 ...]
00:24:41.102 [2024-11-27 05:46:22.281987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:41.102 [2024-11-27 05:46:22.281991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:41.102 [2024-11-27 05:46:22.281997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72368 len:8 PRP1 0x0 PRP2 0x0
00:24:41.102 [2024-11-27 05:46:22.282004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.102 [2024-11-27 05:46:22.282048] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:41.102 [2024-11-27 05:46:22.282070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.102 [2024-11-27 05:46:22.282078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.102 [2024-11-27 05:46:22.282085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.102 [2024-11-27 05:46:22.282092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.102 [2024-11-27 05:46:22.282098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.102 [2024-11-27 05:46:22.282105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.102 [2024-11-27 05:46:22.282112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.102 [2024-11-27 05:46:22.282118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.102 [2024-11-27 05:46:22.282125] nvme_ctrlr.c:1110:nvme_ctrlr_fail:
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:41.102 [2024-11-27 05:46:22.282156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f2370 (9): Bad file descriptor 00:24:41.102 [2024-11-27 05:46:22.285242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:41.102 [2024-11-27 05:46:22.314917] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:24:41.102 11182.50 IOPS, 43.68 MiB/s [2024-11-27T04:46:29.106Z] 11200.00 IOPS, 43.75 MiB/s [2024-11-27T04:46:29.106Z] 11222.08 IOPS, 43.84 MiB/s [2024-11-27T04:46:29.106Z] 11233.15 IOPS, 43.88 MiB/s [2024-11-27T04:46:29.106Z] 11243.36 IOPS, 43.92 MiB/s [2024-11-27T04:46:29.106Z] 11253.13 IOPS, 43.96 MiB/s 00:24:41.102 Latency(us) 00:24:41.102 [2024-11-27T04:46:29.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.103 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:41.103 Verification LBA range: start 0x0 length 0x4000 00:24:41.103 NVMe0n1 : 15.00 11252.77 43.96 426.33 0.00 10937.37 417.40 22594.32 00:24:41.103 [2024-11-27T04:46:29.107Z] =================================================================================================================== 00:24:41.103 [2024-11-27T04:46:29.107Z] Total : 11252.77 43.96 426.33 0.00 10937.37 417.40 22594.32 00:24:41.103 Received shutdown signal, test time was about 15.000000 seconds 00:24:41.103 00:24:41.103 Latency(us) 00:24:41.103 [2024-11-27T04:46:29.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.103 [2024-11-27T04:46:29.107Z] =================================================================================================================== 00:24:41.103 [2024-11-27T04:46:29.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1862925 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1862925 /var/tmp/bdevperf.sock 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1862925 ']' 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:41.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:41.103 [2024-11-27 05:46:28.828968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:41.103 05:46:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:41.103 [2024-11-27 05:46:29.013441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:41.103 05:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:41.362 NVMe0n1 00:24:41.362 05:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:41.930 00:24:41.930 05:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:42.189 00:24:42.189 05:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:42.189 05:46:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:42.189 05:46:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:42.447 05:46:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:45.738 05:46:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:45.738 05:46:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:45.738 05:46:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1863666 00:24:45.738 05:46:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.738 05:46:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1863666 00:24:46.674 { 00:24:46.674 "results": [ 00:24:46.674 { 00:24:46.674 "job": "NVMe0n1", 00:24:46.674 "core_mask": "0x1", 00:24:46.674 "workload": "verify", 00:24:46.674 "status": "finished", 00:24:46.674 "verify_range": { 00:24:46.674 "start": 0, 00:24:46.674 "length": 16384 00:24:46.674 }, 00:24:46.674 "queue_depth": 128, 00:24:46.674 "io_size": 4096, 00:24:46.674 "runtime": 1.007819, 00:24:46.674 "iops": 11159.7419774781, 00:24:46.674 "mibps": 43.592742099523825, 00:24:46.674 "io_failed": 0, 00:24:46.674 "io_timeout": 0, 00:24:46.674 "avg_latency_us": 
11423.054075626516, 00:24:46.674 "min_latency_us": 1162.4838095238094, 00:24:46.674 "max_latency_us": 13918.598095238096 00:24:46.674 } 00:24:46.674 ], 00:24:46.674 "core_count": 1 00:24:46.674 } 00:24:46.933 05:46:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:46.933 [2024-11-27 05:46:28.454027] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:24:46.933 [2024-11-27 05:46:28.454084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862925 ] 00:24:46.933 [2024-11-27 05:46:28.532057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.933 [2024-11-27 05:46:28.570013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.933 [2024-11-27 05:46:30.347016] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:46.933 [2024-11-27 05:46:30.347079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.933 [2024-11-27 05:46:30.347091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.933 [2024-11-27 05:46:30.347100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.933 [2024-11-27 05:46:30.347107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.933 [2024-11-27 05:46:30.347115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 
nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.933 [2024-11-27 05:46:30.347122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.933 [2024-11-27 05:46:30.347129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.933 [2024-11-27 05:46:30.347136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.933 [2024-11-27 05:46:30.347144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:46.933 [2024-11-27 05:46:30.347172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:46.933 [2024-11-27 05:46:30.347188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1631370 (9): Bad file descriptor 00:24:46.933 [2024-11-27 05:46:30.367757] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:46.933 Running I/O for 1 seconds... 
00:24:46.933 11111.00 IOPS, 43.40 MiB/s 00:24:46.933 Latency(us) 00:24:46.933 [2024-11-27T04:46:34.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.933 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:46.933 Verification LBA range: start 0x0 length 0x4000 00:24:46.933 NVMe0n1 : 1.01 11159.74 43.59 0.00 0.00 11423.05 1162.48 13918.60 00:24:46.933 [2024-11-27T04:46:34.937Z] =================================================================================================================== 00:24:46.933 [2024-11-27T04:46:34.937Z] Total : 11159.74 43.59 0.00 0.00 11423.05 1162.48 13918.60 00:24:46.933 05:46:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.933 05:46:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:46.933 05:46:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:47.192 05:46:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:47.192 05:46:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:47.450 05:46:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:47.709 05:46:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1862925 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1862925 ']' 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1862925 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1862925 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1862925' 00:24:51.009 killing process with pid 1862925 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1862925 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1862925 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:51.009 05:46:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.268 rmmod nvme_tcp 00:24:51.268 rmmod nvme_fabrics 00:24:51.268 rmmod nvme_keyring 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1859843 ']' 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1859843 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1859843 ']' 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1859843 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1859843 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1859843' 00:24:51.268 killing process with pid 1859843 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1859843 00:24:51.268 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1859843 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.527 05:46:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:54.065 00:24:54.065 real 0m37.733s 00:24:54.065 user 1m59.248s 00:24:54.065 sys 
0m7.954s 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:54.065 ************************************ 00:24:54.065 END TEST nvmf_failover 00:24:54.065 ************************************ 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.065 ************************************ 00:24:54.065 START TEST nvmf_host_discovery 00:24:54.065 ************************************ 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:54.065 * Looking for test storage... 
00:24:54.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.065 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.066 --rc genhtml_branch_coverage=1 00:24:54.066 --rc genhtml_function_coverage=1 00:24:54.066 --rc 
genhtml_legend=1 00:24:54.066 --rc geninfo_all_blocks=1 00:24:54.066 --rc geninfo_unexecuted_blocks=1 00:24:54.066 00:24:54.066 ' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.066 --rc genhtml_branch_coverage=1 00:24:54.066 --rc genhtml_function_coverage=1 00:24:54.066 --rc genhtml_legend=1 00:24:54.066 --rc geninfo_all_blocks=1 00:24:54.066 --rc geninfo_unexecuted_blocks=1 00:24:54.066 00:24:54.066 ' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.066 --rc genhtml_branch_coverage=1 00:24:54.066 --rc genhtml_function_coverage=1 00:24:54.066 --rc genhtml_legend=1 00:24:54.066 --rc geninfo_all_blocks=1 00:24:54.066 --rc geninfo_unexecuted_blocks=1 00:24:54.066 00:24:54.066 ' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:54.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.066 --rc genhtml_branch_coverage=1 00:24:54.066 --rc genhtml_function_coverage=1 00:24:54.066 --rc genhtml_legend=1 00:24:54.066 --rc geninfo_all_blocks=1 00:24:54.066 --rc geninfo_unexecuted_blocks=1 00:24:54.066 00:24:54.066 ' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.066 05:46:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.066 05:46:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.066 05:46:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:54.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:54.066 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:54.067 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.067 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.067 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.067 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:54.067 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:54.067 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:54.067 05:46:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.639 
05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.639 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.640 05:46:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:00.640 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:00.640 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:00.640 Found net devices under 0000:86:00.0: cvl_0_0 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:00.640 Found net devices under 0000:86:00.1: cvl_0_1 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:25:00.640 00:25:00.640 --- 10.0.0.2 ping statistics --- 00:25:00.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.640 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:25:00.640 00:25:00.640 --- 10.0.0.1 ping statistics --- 00:25:00.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.640 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.640 
05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1868109 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1868109 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1868109 ']' 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.640 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:00.641 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.641 05:46:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.641 [2024-11-27 05:46:47.780369] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:25:00.641 [2024-11-27 05:46:47.780421] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.641 [2024-11-27 05:46:47.860644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.641 [2024-11-27 05:46:47.900166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.641 [2024-11-27 05:46:47.900203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.641 [2024-11-27 05:46:47.900210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.641 [2024-11-27 05:46:47.900216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.641 [2024-11-27 05:46:47.900221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:00.641 [2024-11-27 05:46:47.900786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.641 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.641 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:00.641 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.641 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.641 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.900 [2024-11-27 05:46:48.654564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.900 [2024-11-27 05:46:48.666764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:00.900 05:46:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.900 null0 00:25:00.900 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.901 null1 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1868354 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1868354 /tmp/host.sock 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1868354 ']' 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:00.901 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.901 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.901 [2024-11-27 05:46:48.741553] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:25:00.901 [2024-11-27 05:46:48.741596] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1868354 ] 00:25:00.901 [2024-11-27 05:46:48.815957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.901 [2024-11-27 05:46:48.860852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:01.160 
05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:01.160 05:46:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:01.160 05:46:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:01.160 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.161 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.161 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.161 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.161 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.161 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:01.161 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:01.418 
05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.418 [2024-11-27 05:46:49.284359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:01.418 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:01.676 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.677 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.677 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.677 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:01.677 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:01.677 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.677 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.677 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:01.677 05:46:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:02.244 [2024-11-27 05:46:50.023830] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:02.244 [2024-11-27 05:46:50.023853] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:02.244 [2024-11-27 05:46:50.023872] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:02.244 [2024-11-27 05:46:50.110132] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:02.244 [2024-11-27 05:46:50.171936] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:02.244 [2024-11-27 05:46:50.172781] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x182ee30:1 started. 00:25:02.244 [2024-11-27 05:46:50.174190] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:02.244 [2024-11-27 05:46:50.174206] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:02.244 [2024-11-27 05:46:50.181241] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x182ee30 was disconnected and freed. delete nvme_qpair. 00:25:02.502 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.502 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:02.502 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:02.502 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.502 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.502 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.502 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.502 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.502 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.502 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:02.762 05:46:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:02.762 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:02.763 
05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.763 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:03.023 [2024-11-27 05:46:50.840462] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x182f2f0:1 started. 00:25:03.023 [2024-11-27 05:46:50.842704] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x182f2f0 was disconnected and freed. delete nvme_qpair. 
00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.023 [2024-11-27 05:46:50.920729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:03.023 [2024-11-27 05:46:50.921735] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:03.023 [2024-11-27 05:46:50.921753] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.023 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.024 05:46:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.024 05:46:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.024 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.024 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:03.024 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.024 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:03.024 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:03.024 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.024 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.024 05:46:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:03.283 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:03.283 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:03.283 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:03.283 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.283 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.283 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:03.283 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:03.283 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.283 [2024-11-27 05:46:51.048468] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:03.283 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:03.283 05:46:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:03.283 [2024-11-27 05:46:51.108012] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:03.283 [2024-11-27 05:46:51.108046] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:03.283 [2024-11-27 05:46:51.108054] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:25:03.283 [2024-11-27 05:46:51.108058] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.221 [2024-11-27 05:46:52.177064] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:04.221 [2024-11-27 05:46:52.177085] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:04.221 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:04.221 [2024-11-27 05:46:52.184797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.221 [2024-11-27 05:46:52.184816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.221 [2024-11-27 05:46:52.184824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.221 [2024-11-27 05:46:52.184831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.221 [2024-11-27 05:46:52.184838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.222 [2024-11-27 05:46:52.184845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.222 [2024-11-27 05:46:52.184852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.222 [2024-11-27 05:46:52.184858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.222 [2024-11-27 05:46:52.184864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff390 is same with the state(6) to be set 00:25:04.222 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:04.222 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:04.222 05:46:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.222 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:04.222 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.222 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:04.222 [2024-11-27 05:46:52.194810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ff390 (9): Bad file descriptor 00:25:04.222 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.222 [2024-11-27 05:46:52.204844] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.222 [2024-11-27 05:46:52.204855] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.222 [2024-11-27 05:46:52.204860] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.222 [2024-11-27 05:46:52.204865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.222 [2024-11-27 05:46:52.204884] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:04.222 [2024-11-27 05:46:52.205010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.222 [2024-11-27 05:46:52.205022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ff390 with addr=10.0.0.2, port=4420 00:25:04.222 [2024-11-27 05:46:52.205029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff390 is same with the state(6) to be set 00:25:04.222 [2024-11-27 05:46:52.205040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ff390 (9): Bad file descriptor 00:25:04.222 [2024-11-27 05:46:52.205049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.222 [2024-11-27 05:46:52.205056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.222 [2024-11-27 05:46:52.205063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.222 [2024-11-27 05:46:52.205068] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.222 [2024-11-27 05:46:52.205073] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.222 [2024-11-27 05:46:52.205077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:04.222 [2024-11-27 05:46:52.214914] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.222 [2024-11-27 05:46:52.214924] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:04.222 [2024-11-27 05:46:52.214928] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.222 [2024-11-27 05:46:52.214932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.222 [2024-11-27 05:46:52.214945] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.222 [2024-11-27 05:46:52.215184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.222 [2024-11-27 05:46:52.215196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ff390 with addr=10.0.0.2, port=4420 00:25:04.222 [2024-11-27 05:46:52.215202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff390 is same with the state(6) to be set 00:25:04.222 [2024-11-27 05:46:52.215212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ff390 (9): Bad file descriptor 00:25:04.222 [2024-11-27 05:46:52.215228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.222 [2024-11-27 05:46:52.215234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.222 [2024-11-27 05:46:52.215241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.222 [2024-11-27 05:46:52.215246] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.222 [2024-11-27 05:46:52.215250] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.222 [2024-11-27 05:46:52.215254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:04.482 [2024-11-27 05:46:52.224976] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.482 [2024-11-27 05:46:52.224989] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.482 [2024-11-27 05:46:52.224993] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.482 [2024-11-27 05:46:52.225000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.482 [2024-11-27 05:46:52.225014] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.482 [2024-11-27 05:46:52.225297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.482 [2024-11-27 05:46:52.225309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ff390 with addr=10.0.0.2, port=4420 00:25:04.482 [2024-11-27 05:46:52.225316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff390 is same with the state(6) to be set 00:25:04.482 [2024-11-27 05:46:52.225327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ff390 (9): Bad file descriptor 00:25:04.482 [2024-11-27 05:46:52.225347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.482 [2024-11-27 05:46:52.225355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.482 [2024-11-27 05:46:52.225361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.482 [2024-11-27 05:46:52.225366] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:04.482 [2024-11-27 05:46:52.225370] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.482 [2024-11-27 05:46:52.225374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:04.482 [2024-11-27 05:46:52.235043] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.482 [2024-11-27 05:46:52.235055] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.482 [2024-11-27 05:46:52.235058] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:25:04.482 [2024-11-27 05:46:52.235062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.482 [2024-11-27 05:46:52.235075] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.482 [2024-11-27 05:46:52.235226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.482 [2024-11-27 05:46:52.235237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ff390 with addr=10.0.0.2, port=4420 00:25:04.482 [2024-11-27 05:46:52.235244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff390 is same with the state(6) to be set 00:25:04.482 [2024-11-27 05:46:52.235253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ff390 (9): Bad file descriptor 00:25:04.482 [2024-11-27 05:46:52.235263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.482 [2024-11-27 05:46:52.235268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.482 [2024-11-27 05:46:52.235279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.482 [2024-11-27 05:46:52.235284] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.482 [2024-11-27 05:46:52.235288] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.482 [2024-11-27 05:46:52.235292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.482 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.482 [2024-11-27 05:46:52.245105] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.482 [2024-11-27 05:46:52.245118] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:04.482 [2024-11-27 05:46:52.245123] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.482 [2024-11-27 05:46:52.245126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.482 [2024-11-27 05:46:52.245140] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:04.482 [2024-11-27 05:46:52.245330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.482 [2024-11-27 05:46:52.245342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ff390 with addr=10.0.0.2, port=4420 00:25:04.482 [2024-11-27 05:46:52.245349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff390 is same with the state(6) to be set 00:25:04.482 [2024-11-27 05:46:52.245359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ff390 (9): Bad file descriptor 00:25:04.482 [2024-11-27 05:46:52.245368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.482 [2024-11-27 05:46:52.245374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.482 [2024-11-27 05:46:52.245381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.482 [2024-11-27 05:46:52.245386] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.482 [2024-11-27 05:46:52.245390] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.482 [2024-11-27 05:46:52.245394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:04.482 [2024-11-27 05:46:52.255170] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:04.482 [2024-11-27 05:46:52.255180] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:04.482 [2024-11-27 05:46:52.255184] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:04.482 [2024-11-27 05:46:52.255188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.482 [2024-11-27 05:46:52.255200] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:04.482 [2024-11-27 05:46:52.255395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.482 [2024-11-27 05:46:52.255410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ff390 with addr=10.0.0.2, port=4420 00:25:04.482 [2024-11-27 05:46:52.255417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ff390 is same with the state(6) to be set 00:25:04.483 [2024-11-27 05:46:52.255427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ff390 (9): Bad file descriptor 00:25:04.483 [2024-11-27 05:46:52.255436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:04.483 [2024-11-27 05:46:52.255442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:04.483 [2024-11-27 05:46:52.255449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:04.483 [2024-11-27 05:46:52.255454] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:04.483 [2024-11-27 05:46:52.255458] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:04.483 [2024-11-27 05:46:52.255462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:04.483 [2024-11-27 05:46:52.263066] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:04.483 [2024-11-27 05:46:52.263081] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:04.483 05:46:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:04.483 05:46:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.483 
05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.483 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:04.743 05:46:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.743 05:46:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.680 [2024-11-27 05:46:53.601145] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:05.680 [2024-11-27 05:46:53.601161] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:05.680 [2024-11-27 05:46:53.601171] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:05.939 [2024-11-27 05:46:53.688446] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:05.940 [2024-11-27 05:46:53.788184] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:05.940 [2024-11-27 05:46:53.788753] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1839570:1 started. 00:25:05.940 [2024-11-27 05:46:53.790298] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:05.940 [2024-11-27 05:46:53.790324] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.940 [2024-11-27 05:46:53.791759] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1839570 was disconnected and freed. delete nvme_qpair. 
00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.940 request: 00:25:05.940 { 00:25:05.940 "name": "nvme", 00:25:05.940 "trtype": "tcp", 00:25:05.940 "traddr": "10.0.0.2", 00:25:05.940 "adrfam": "ipv4", 00:25:05.940 "trsvcid": "8009", 00:25:05.940 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:05.940 "wait_for_attach": true, 00:25:05.940 "method": "bdev_nvme_start_discovery", 00:25:05.940 "req_id": 1 00:25:05.940 } 00:25:05.940 Got JSON-RPC error response 00:25:05.940 response: 00:25:05.940 { 00:25:05.940 "code": -17, 00:25:05.940 
"message": "File exists" 00:25:05.940 } 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.940 request: 00:25:05.940 { 00:25:05.940 "name": "nvme_second", 00:25:05.940 "trtype": "tcp", 00:25:05.940 "traddr": "10.0.0.2", 00:25:05.940 "adrfam": "ipv4", 00:25:05.940 "trsvcid": "8009", 00:25:05.940 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:05.940 "wait_for_attach": true, 00:25:05.940 "method": "bdev_nvme_start_discovery", 00:25:05.940 "req_id": 1 00:25:05.940 } 00:25:05.940 Got JSON-RPC error response 00:25:05.940 response: 00:25:05.940 { 00:25:05.940 "code": -17, 00:25:05.940 "message": "File exists" 00:25:05.940 } 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.940 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # xargs 00:25:06.199 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.199 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:06.199 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:06.199 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.199 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.199 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.199 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.199 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.199 05:46:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.199 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.199 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:06.199 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:06.199 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:06.199 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:06.199 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:06.199 
05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.199 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:06.200 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.200 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:06.200 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.200 05:46:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.135 [2024-11-27 05:46:55.033764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:07.135 [2024-11-27 05:46:55.033789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17fe0a0 with addr=10.0.0.2, port=8010 00:25:07.135 [2024-11-27 05:46:55.033802] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:07.135 [2024-11-27 05:46:55.033809] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:07.135 [2024-11-27 05:46:55.033815] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:08.071 [2024-11-27 05:46:56.036208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.071 [2024-11-27 05:46:56.036232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17fe0a0 with addr=10.0.0.2, port=8010 00:25:08.071 [2024-11-27 05:46:56.036243] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:08.071 [2024-11-27 05:46:56.036249] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:08.071 
[2024-11-27 05:46:56.036254] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:09.449 [2024-11-27 05:46:57.038368] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:09.449 request: 00:25:09.449 { 00:25:09.449 "name": "nvme_second", 00:25:09.449 "trtype": "tcp", 00:25:09.449 "traddr": "10.0.0.2", 00:25:09.449 "adrfam": "ipv4", 00:25:09.449 "trsvcid": "8010", 00:25:09.449 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:09.449 "wait_for_attach": false, 00:25:09.449 "attach_timeout_ms": 3000, 00:25:09.449 "method": "bdev_nvme_start_discovery", 00:25:09.449 "req_id": 1 00:25:09.449 } 00:25:09.449 Got JSON-RPC error response 00:25:09.449 response: 00:25:09.449 { 00:25:09.449 "code": -110, 00:25:09.449 "message": "Connection timed out" 00:25:09.449 } 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
sort 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1868354 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.449 rmmod nvme_tcp 00:25:09.449 rmmod nvme_fabrics 00:25:09.449 rmmod nvme_keyring 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1868109 ']' 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1868109 00:25:09.449 
05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1868109 ']' 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1868109 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1868109 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1868109' 00:25:09.449 killing process with pid 1868109 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1868109 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1868109 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.449 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.450 05:46:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.450 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.450 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.450 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.450 05:46:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.981 00:25:11.981 real 0m17.891s 00:25:11.981 user 0m21.277s 00:25:11.981 sys 0m5.942s 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.981 ************************************ 00:25:11.981 END TEST nvmf_host_discovery 00:25:11.981 ************************************ 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.981 ************************************ 00:25:11.981 START TEST nvmf_host_multipath_status 00:25:11.981 ************************************ 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:25:11.981 * Looking for test storage... 00:25:11.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 
00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.981 
05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:11.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.981 --rc genhtml_branch_coverage=1 00:25:11.981 --rc genhtml_function_coverage=1 00:25:11.981 --rc genhtml_legend=1 00:25:11.981 --rc geninfo_all_blocks=1 00:25:11.981 --rc geninfo_unexecuted_blocks=1 00:25:11.981 00:25:11.981 ' 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:11.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.981 --rc genhtml_branch_coverage=1 00:25:11.981 --rc genhtml_function_coverage=1 00:25:11.981 --rc genhtml_legend=1 00:25:11.981 --rc geninfo_all_blocks=1 00:25:11.981 --rc geninfo_unexecuted_blocks=1 00:25:11.981 00:25:11.981 ' 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:11.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.981 --rc genhtml_branch_coverage=1 00:25:11.981 --rc genhtml_function_coverage=1 00:25:11.981 --rc genhtml_legend=1 00:25:11.981 --rc geninfo_all_blocks=1 00:25:11.981 --rc geninfo_unexecuted_blocks=1 00:25:11.981 00:25:11.981 ' 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:11.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.981 --rc genhtml_branch_coverage=1 00:25:11.981 --rc genhtml_function_coverage=1 00:25:11.981 --rc genhtml_legend=1 00:25:11.981 --rc geninfo_all_blocks=1 00:25:11.981 --rc geninfo_unexecuted_blocks=1 00:25:11.981 00:25:11.981 ' 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 
00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.981 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.982 05:46:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.982 05:46:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:18.559 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:18.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:18.559 Found net devices under 0000:86:00.0: cvl_0_0 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.559 05:47:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:18.559 Found net devices under 0000:86:00.1: cvl_0_1 00:25:18.559 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.560 05:47:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:18.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:25:18.560 00:25:18.560 --- 10.0.0.2 ping statistics --- 00:25:18.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.560 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:25:18.560 00:25:18.560 --- 10.0.0.1 ping statistics --- 00:25:18.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.560 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1873398 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1873398 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1873398 ']' 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:18.560 [2024-11-27 05:47:05.699468] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:25:18.560 [2024-11-27 05:47:05.699512] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.560 [2024-11-27 05:47:05.778482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:18.560 [2024-11-27 05:47:05.819532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.560 [2024-11-27 05:47:05.819572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:18.560 [2024-11-27 05:47:05.819579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.560 [2024-11-27 05:47:05.819585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.560 [2024-11-27 05:47:05.819590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.560 [2024-11-27 05:47:05.820845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.560 [2024-11-27 05:47:05.820848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1873398 00:25:18.560 05:47:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:18.560 [2024-11-27 05:47:06.114503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.560 05:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:18.560 Malloc0 00:25:18.560 05:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:18.819 05:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:18.819 05:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.079 [2024-11-27 05:47:06.931104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.079 05:47:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:19.339 [2024-11-27 05:47:07.131607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:19.339 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1873680 00:25:19.339 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:19.339 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:19.339 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1873680 /var/tmp/bdevperf.sock 00:25:19.339 05:47:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1873680 ']' 00:25:19.339 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.339 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.339 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:19.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:19.339 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.339 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:19.598 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.598 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:19.598 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:19.598 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:20.166 Nvme0n1 00:25:20.166 05:47:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:20.425 Nvme0n1 00:25:20.425 05:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:20.425 05:47:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:22.962 05:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:22.962 05:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:22.962 05:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:22.962 05:47:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:23.900 05:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:23.900 05:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:23.900 05:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.900 05:47:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:24.159 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.159 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:24.159 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.159 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:24.419 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.419 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:24.419 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.419 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.678 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.678 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.678 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.678 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.678 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.937 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:24.937 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.937 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.937 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.937 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:24.937 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:24.937 05:47:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.196 05:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.196 05:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:25.196 05:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.459 05:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:25.718 05:47:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:26.663 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:26.663 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:26.663 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.663 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.922 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.922 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:26.922 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.922 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.181 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.181 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:27.181 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.181 05:47:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.181 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.181 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.181 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.181 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.440 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.440 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.440 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.440 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.699 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.699 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:27.699 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:25:27.699 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.958 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.958 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:27.958 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:27.958 05:47:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:28.216 05:47:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:29.594 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:29.594 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:29.594 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.594 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.594 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:25:29.594 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:29.595 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.595 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.595 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.595 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.595 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.595 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.854 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.854 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.854 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.854 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.113 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
true == \t\r\u\e ]] 00:25:30.113 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.113 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.113 05:47:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.372 05:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.372 05:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:30.372 05:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.372 05:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.631 05:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.631 05:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:30.631 05:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:30.631 05:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:30.890 05:47:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:31.826 05:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:31.826 05:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:31.826 05:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.826 05:47:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.085 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.085 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:32.085 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.085 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.345 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.345 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.345 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:32.345 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.604 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.604 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.604 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.604 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.864 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.864 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:32.864 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.864 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.864 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.864 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:33.123 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 
00:25:33.123 05:47:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.123 05:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:33.123 05:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:33.123 05:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:33.382 05:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:33.642 05:47:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:34.578 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:34.578 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:34.578 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.578 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.837 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.837 
05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:34.837 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.837 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.096 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.096 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.096 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.096 05:47:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.354 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.354 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.354 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.354 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.354 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:25:35.354 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:35.354 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.354 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.613 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.613 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:35.613 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.613 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:35.871 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.871 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:35.871 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:36.130 05:47:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:36.130 05:47:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.509 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.509 
05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:37.767 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.767 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.767 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.767 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.026 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.026 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:38.026 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.026 05:47:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.284 05:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:38.284 05:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:38.284 05:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:25:38.284 05:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.543 05:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.543 05:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:38.543 05:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:38.543 05:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:38.802 05:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:39.060 05:47:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:40.006 05:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:40.006 05:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.006 05:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.006 05:47:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.266 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.266 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:40.266 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.266 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.526 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.526 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.526 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.526 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.786 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.786 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.786 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.786 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.786 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.786 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:41.046 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.046 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:41.046 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.046 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:41.046 05:47:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.046 05:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.305 05:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.305 05:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:41.305 05:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:41.564 05:47:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:41.824 05:47:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:42.762 05:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:42.762 05:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:42.762 05:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.762 05:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:43.022 05:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.022 05:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:43.022 05:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.022 05:47:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.281 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.281 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.281 
05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.281 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:43.281 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.281 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.281 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.540 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.540 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.540 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.540 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.540 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.799 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.799 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
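The `port_status` checks traced above all run the same pipeline: dump the I/O paths from bdevperf over its RPC socket, then pick one listener port's path with jq and read a single boolean attribute. A minimal self-contained sketch of that filter (assumption: the JSON below only mirrors the shape implied by the jq expressions in this log — real `bdev_nvme_get_io_paths` output carries more fields, and the values here are illustrative):

```shell
#!/usr/bin/env bash
# Hypothetical sample of bdev_nvme_get_io_paths output; structure inferred
# from the jq filters in the transcript, values are illustrative only.
json='{"poll_groups":[{"io_paths":[
 {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
 {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":true}]}]}'

# Same filter shape as the port_status helper: select the path whose
# listener port matches, then read one boolean attribute from it.
path_attr() {  # usage: path_attr <trsvcid> <attribute>
  jq -r --arg p "$1" --arg a "$2" \
    '.poll_groups[].io_paths[] | select(.transport.trsvcid==$p) | .[$a]' <<<"$json"
}

path_attr 4421 current     # prints: false
path_attr 4420 accessible  # prints: true
```

In the real script the JSON comes from `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths`, and the result is compared against the expected literal with `[[ $status == \t\r\u\e ]]`-style pattern matches, as seen throughout the trace.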
00:25:43.799 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.799 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.059 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.059 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:44.059 05:47:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:44.335 05:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:44.634 05:47:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:45.658 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:45.658 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:45.658 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.658 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:25:45.658 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.658 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:45.658 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.658 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.953 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.953 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.953 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.953 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:46.226 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.226 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:46.226 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:46.226 05:47:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.226 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.226 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:46.226 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.226 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:46.486 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.486 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:46.486 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.486 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.747 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.747 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:46.747 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:47.007 05:47:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:47.007 05:47:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:48.386 05:47:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:48.386 05:47:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:48.386 05:47:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.387 05:47:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:48.387 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.387 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:48.387 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.387 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:48.646 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.646 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:48.646 
05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.646 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:48.646 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.646 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:48.646 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.646 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:48.906 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.906 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:48.906 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.906 05:47:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:49.166 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.166 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 
00:25:49.166 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.166 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:49.425 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1873680 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1873680 ']' 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1873680 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1873680 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1873680' 00:25:49.426 killing process with pid 1873680 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1873680 00:25:49.426 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1873680 00:25:49.426 { 
00:25:49.426 "results": [ 00:25:49.426 { 00:25:49.426 "job": "Nvme0n1", 00:25:49.426 "core_mask": "0x4", 00:25:49.426 "workload": "verify", 00:25:49.426 "status": "terminated", 00:25:49.426 "verify_range": { 00:25:49.426 "start": 0, 00:25:49.426 "length": 16384 00:25:49.426 }, 00:25:49.426 "queue_depth": 128, 00:25:49.426 "io_size": 4096, 00:25:49.426 "runtime": 28.728939, 00:25:49.426 "iops": 10536.518595413496, 00:25:49.426 "mibps": 41.15827576333397, 00:25:49.426 "io_failed": 0, 00:25:49.426 "io_timeout": 0, 00:25:49.426 "avg_latency_us": 12128.703166237283, 00:25:49.426 "min_latency_us": 827.0019047619047, 00:25:49.426 "max_latency_us": 3019898.88 00:25:49.426 } 00:25:49.426 ], 00:25:49.426 "core_count": 1 00:25:49.426 } 00:25:49.689 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1873680 00:25:49.689 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:49.689 [2024-11-27 05:47:07.188710] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:25:49.689 [2024-11-27 05:47:07.188761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873680 ] 00:25:49.689 [2024-11-27 05:47:07.264434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.689 [2024-11-27 05:47:07.307299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.689 Running I/O for 90 seconds... 
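Each `set_ANA_state` pair in the transcript is followed by a `check_status` call whose first two arguments are the expected `current` flags for ports 4420/4421. A pure-bash sketch of that expectation (assumption: this rule is reverse-engineered from the four transitions visible in this log under the `active_active` policy — (optimized,optimized)→true/true, (non_optimized,optimized)→false/true, (non_optimized,non_optimized)→true/true, (non_optimized,inaccessible)→true/false — and is not taken from `multipath_status.sh` itself):

```shell
#!/usr/bin/env bash
# Rank ANA states: lower number = better (optimized > non_optimized > inaccessible).
rank() {
  case "$1" in
    optimized)     echo 0 ;;
    non_optimized) echo 1 ;;
    inaccessible)  echo 2 ;;
  esac
}

# Print the expected "current" flags for the 4420/4421 paths given their
# listener ANA states: a path is current when it is accessible and no other
# accessible path has a strictly better ANA state.
expected_current() {
  local r1 r2 best
  r1=$(rank "$1"); r2=$(rank "$2")
  best=$(( r1 < r2 ? r1 : r2 ))
  { [ "$r1" -eq "$best" ] && [ "$1" != inaccessible ] && echo -n "true "; } || echo -n "false "
  { [ "$r2" -eq "$best" ] && [ "$2" != inaccessible ] && echo "true"; }    || echo "false"
}

expected_current non_optimized optimized      # prints: false true
expected_current non_optimized inaccessible   # prints: true false
```

This matches each `check_status <cur4420> <cur4421> ...` invocation in the trace; the `connected` flags stay true throughout, and `accessible` is simply `state != inaccessible`.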
00:25:49.689 11372.00 IOPS, 44.42 MiB/s [2024-11-27T04:47:37.693Z] 11292.50 IOPS, 44.11 MiB/s [2024-11-27T04:47:37.693Z] 11365.00 IOPS, 44.39 MiB/s [2024-11-27T04:47:37.693Z] 11403.75 IOPS, 44.55 MiB/s [2024-11-27T04:47:37.693Z] 11403.20 IOPS, 44.54 MiB/s [2024-11-27T04:47:37.693Z] 11412.17 IOPS, 44.58 MiB/s [2024-11-27T04:47:37.693Z] 11381.29 IOPS, 44.46 MiB/s [2024-11-27T04:47:37.693Z] 11369.00 IOPS, 44.41 MiB/s [2024-11-27T04:47:37.693Z] 11362.56 IOPS, 44.38 MiB/s [2024-11-27T04:47:37.693Z] 11360.80 IOPS, 44.38 MiB/s [2024-11-27T04:47:37.693Z] 11350.00 IOPS, 44.34 MiB/s [2024-11-27T04:47:37.693Z] 11344.25 IOPS, 44.31 MiB/s [2024-11-27T04:47:37.693Z] [2024-11-27 05:47:21.257611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.689 [2024-11-27 05:47:21.257650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257758] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257866] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.257985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.257993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.258004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.258011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.258024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.258031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.258043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.258049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.258061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.258067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.258079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.258085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.258097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.258105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.258117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.689 [2024-11-27 05:47:21.258124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.689 [2024-11-27 05:47:21.258137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.258980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.258994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105488 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.690 [2024-11-27 05:47:21.259225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:49.690 [2024-11-27 05:47:21.259238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:25:49.691 [2024-11-27 05:47:21.259376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 
[2024-11-27 05:47:21.259483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.259496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.259503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 
05:47:21.260378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 
05:47:21.260496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 
05:47:21.260622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.691 [2024-11-27 05:47:21.260680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:49.691 [2024-11-27 05:47:21.260695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 
05:47:21.260746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 
05:47:21.260871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.260967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.260982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 
05:47:21.260989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 
05:47:21.261184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 
05:47:21.261313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:49.692 [2024-11-27 05:47:21.261424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.692 [2024-11-27 05:47:21.261431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 
05:47:21.261448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 
05:47:21.261574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 
05:47:21.261715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:21.261761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:21.261769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:49.693 11093.54 IOPS, 43.33 MiB/s [2024-11-27T04:47:37.697Z] 10301.14 IOPS, 40.24 MiB/s [2024-11-27T04:47:37.697Z] 9614.40 IOPS, 37.56 MiB/s [2024-11-27T04:47:37.697Z] 9216.19 IOPS, 36.00 MiB/s [2024-11-27T04:47:37.697Z] 9333.18 IOPS, 36.46 MiB/s [2024-11-27T04:47:37.697Z] 9443.78 IOPS, 36.89 MiB/s [2024-11-27T04:47:37.697Z] 9632.89 IOPS, 37.63 MiB/s [2024-11-27T04:47:37.697Z] 9828.65 IOPS, 38.39 MiB/s [2024-11-27T04:47:37.697Z] 9993.62 IOPS, 39.04 MiB/s [2024-11-27T04:47:37.697Z] 10051.09 IOPS, 39.26 MiB/s [2024-11-27T04:47:37.697Z] 10103.83 IOPS, 39.47 MiB/s [2024-11-27T04:47:37.697Z] 10176.38 IOPS, 39.75 MiB/s [2024-11-27T04:47:37.697Z] 10314.80 IOPS, 40.29 MiB/s [2024-11-27T04:47:37.697Z] 10430.27 IOPS, 40.74 MiB/s [2024-11-27T04:47:37.697Z] [2024-11-27 05:47:34.946336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:49.693 [2024-11-27 05:47:34.946374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:34.946433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:34.946458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:34.946477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:34.946497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:34.946515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:49.693 
[2024-11-27 05:47:34.946527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:34.946535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:34.946553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:34.946572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:34.946592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 05:47:34.946611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.693 [2024-11-27 
05:47:34.946629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.693 [2024-11-27 05:47:34.946648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:49.693 [2024-11-27 05:47:34.946661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.946667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.946685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.946694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.946706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.946713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.946725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.946732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 
05:47:34.946745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.946752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.946765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.946772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.946784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.946791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.946803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.946810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.946822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.946829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 
05:47:34.947099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.947122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.947142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.947161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.947183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.947202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 
05:47:34.947214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.947221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.947240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.947259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.694 [2024-11-27 05:47:34.947278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.694 [2024-11-27 05:47:34.947296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.694 [2024-11-27 
05:47:34.947315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.947334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.694 [2024-11-27 05:47:34.947353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.694 [2024-11-27 05:47:34.947365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.695 [2024-11-27 05:47:34.947410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 
05:47:34.947423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.695 [2024-11-27 05:47:34.947430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.695 [2024-11-27 05:47:34.947449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.695 [2024-11-27 05:47:34.947467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 
05:47:34.947524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 
05:47:34.947629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.947650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.695 [2024-11-27 05:47:34.947656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.948576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.695 [2024-11-27 05:47:34.948594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.948608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.695 [2024-11-27 05:47:34.948616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.948628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.695 [2024-11-27 05:47:34.948636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.948648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.695 [2024-11-27 
05:47:34.948655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.948668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.695 [2024-11-27 05:47:34.948681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:49.695 [2024-11-27 05:47:34.948693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.695 [2024-11-27 05:47:34.948700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.695 10487.78 IOPS, 40.97 MiB/s [2024-11-27T04:47:37.699Z] 10520.43 IOPS, 41.10 MiB/s [2024-11-27T04:47:37.699Z] Received shutdown signal, test time was about 28.729581 seconds 00:25:49.695 00:25:49.695 Latency(us) 00:25:49.695 [2024-11-27T04:47:37.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.695 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:49.695 Verification LBA range: start 0x0 length 0x4000 00:25:49.695 Nvme0n1 : 28.73 10536.52 41.16 0.00 0.00 12128.70 827.00 3019898.88 00:25:49.695 [2024-11-27T04:47:37.699Z] =================================================================================================================== 00:25:49.695 [2024-11-27T04:47:37.699Z] Total : 10536.52 41.16 0.00 0.00 12128.70 827.00 3019898.88 00:25:49.695 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.695 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:49.695 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:49.695 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:49.695 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:49.695 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:49.695 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:49.695 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:49.695 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:49.695 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:49.695 rmmod nvme_tcp 00:25:49.955 rmmod nvme_fabrics 00:25:49.955 rmmod nvme_keyring 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1873398 ']' 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1873398 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1873398 ']' 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1873398 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # uname 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1873398 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1873398' 00:25:49.955 killing process with pid 1873398 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1873398 00:25:49.955 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1873398 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.215 05:47:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.125 05:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:52.125 00:25:52.125 real 0m40.520s 00:25:52.125 user 1m49.933s 00:25:52.125 sys 0m11.307s 00:25:52.125 05:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.125 05:47:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:52.125 ************************************ 00:25:52.125 END TEST nvmf_host_multipath_status 00:25:52.125 ************************************ 00:25:52.125 05:47:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:52.125 05:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:52.125 05:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:52.125 05:47:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.125 ************************************ 00:25:52.125 START TEST nvmf_discovery_remove_ifc 00:25:52.125 ************************************ 00:25:52.125 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:52.386 * Looking for test storage... 
00:25:52.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:52.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.386 --rc genhtml_branch_coverage=1 00:25:52.386 --rc genhtml_function_coverage=1 00:25:52.386 --rc genhtml_legend=1 00:25:52.386 --rc geninfo_all_blocks=1 00:25:52.386 --rc geninfo_unexecuted_blocks=1 00:25:52.386 00:25:52.386 ' 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:52.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.386 --rc genhtml_branch_coverage=1 00:25:52.386 --rc genhtml_function_coverage=1 00:25:52.386 --rc genhtml_legend=1 00:25:52.386 --rc geninfo_all_blocks=1 00:25:52.386 --rc geninfo_unexecuted_blocks=1 00:25:52.386 00:25:52.386 ' 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:52.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.386 --rc genhtml_branch_coverage=1 00:25:52.386 --rc genhtml_function_coverage=1 00:25:52.386 --rc genhtml_legend=1 00:25:52.386 --rc geninfo_all_blocks=1 00:25:52.386 --rc geninfo_unexecuted_blocks=1 00:25:52.386 00:25:52.386 ' 00:25:52.386 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:52.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.387 --rc genhtml_branch_coverage=1 00:25:52.387 --rc genhtml_function_coverage=1 00:25:52.387 --rc genhtml_legend=1 00:25:52.387 --rc geninfo_all_blocks=1 00:25:52.387 --rc geninfo_unexecuted_blocks=1 00:25:52.387 00:25:52.387 ' 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:52.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:52.387 
05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:52.387 05:47:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.964 05:47:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.964 05:47:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:58.964 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.964 05:47:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:58.964 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:58.964 Found net devices under 0000:86:00.0: cvl_0_0 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.964 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:58.964 Found net devices under 0000:86:00.1: cvl_0_1 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.965 05:47:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:25:58.965 00:25:58.965 --- 10.0.0.2 ping statistics --- 00:25:58.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.965 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:58.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:25:58.965 00:25:58.965 --- 10.0.0.1 ping statistics --- 00:25:58.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.965 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1882245 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1882245 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1882245 ']' 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.965 [2024-11-27 05:47:46.306509] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:25:58.965 [2024-11-27 05:47:46.306556] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.965 [2024-11-27 05:47:46.384778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.965 [2024-11-27 05:47:46.424734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:58.965 [2024-11-27 05:47:46.424768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:58.965 [2024-11-27 05:47:46.424775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.965 [2024-11-27 05:47:46.424781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.965 [2024-11-27 05:47:46.424786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.965 [2024-11-27 05:47:46.425327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.965 [2024-11-27 05:47:46.568036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.965 [2024-11-27 05:47:46.576204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:58.965 null0 00:25:58.965 [2024-11-27 05:47:46.608190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1882276 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1882276 /tmp/host.sock 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1882276 ']' 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:58.965 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.965 [2024-11-27 05:47:46.676847] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:25:58.965 [2024-11-27 05:47:46.676890] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882276 ] 00:25:58.965 [2024-11-27 05:47:46.750080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.965 [2024-11-27 05:47:46.792592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:58.965 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:58.966 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.966 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.966 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.966 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:58.966 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.966 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.966 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.966 05:47:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:58.966 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.966 05:47:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.345 [2024-11-27 05:47:47.924336] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:00.345 [2024-11-27 05:47:47.924356] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:00.345 [2024-11-27 05:47:47.924371] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:00.345 [2024-11-27 05:47:48.010632] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:00.345 [2024-11-27 05:47:48.065208] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:00.345 [2024-11-27 05:47:48.065932] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x227fa50:1 started. 
00:26:00.345 [2024-11-27 05:47:48.067125] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:00.345 [2024-11-27 05:47:48.067164] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:00.345 [2024-11-27 05:47:48.067184] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:00.345 [2024-11-27 05:47:48.067196] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:00.345 [2024-11-27 05:47:48.067214] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:00.345 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.345 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:00.345 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.345 [2024-11-27 05:47:48.073245] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x227fa50 was disconnected and freed. delete nvme_qpair. 
00:26:00.345 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.345 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.345 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.345 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.346 05:47:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.346 05:47:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:01.284 05:47:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.285 05:47:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.285 05:47:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.285 05:47:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.285 05:47:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.285 05:47:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.285 05:47:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.543 05:47:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.543 05:47:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:01.543 05:47:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.481 05:47:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:26:02.481 05:47:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.481 05:47:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.481 05:47:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.481 05:47:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.481 05:47:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.481 05:47:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.481 05:47:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.481 05:47:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:02.481 05:47:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:03.419 05:47:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.419 05:47:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.419 05:47:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.419 05:47:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.419 05:47:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.419 05:47:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.419 05:47:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.419 05:47:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.678 05:47:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:03.678 05:47:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.617 05:47:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.617 05:47:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.617 05:47:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.617 05:47:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.617 05:47:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.617 05:47:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.617 05:47:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.617 05:47:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.617 05:47:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:04.617 05:47:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:05.556 05:47:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:05.556 05:47:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.556 05:47:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:05.556 05:47:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.556 05:47:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:05.556 05:47:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.556 05:47:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:05.556 [2024-11-27 05:47:53.508880] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:05.556 [2024-11-27 05:47:53.508921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.556 [2024-11-27 05:47:53.508931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.556 [2024-11-27 05:47:53.508940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.556 [2024-11-27 05:47:53.508948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.556 [2024-11-27 05:47:53.508955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.556 [2024-11-27 05:47:53.508963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.556 [2024-11-27 05:47:53.508970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.556 [2024-11-27 05:47:53.508977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.556 [2024-11-27 05:47:53.508985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.556 [2024-11-27 05:47:53.508991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.556 [2024-11-27 05:47:53.509003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225c240 is same with the state(6) to be set 00:26:05.556 05:47:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.556 [2024-11-27 05:47:53.518902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225c240 (9): Bad file descriptor 00:26:05.556 [2024-11-27 05:47:53.528937] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:05.556 [2024-11-27 05:47:53.528949] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:05.556 [2024-11-27 05:47:53.528954] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:05.556 [2024-11-27 05:47:53.528959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:05.556 [2024-11-27 05:47:53.528981] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:05.556 05:47:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:05.556 05:47:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:06.936 05:47:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.936 05:47:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.936 05:47:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.936 05:47:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.936 05:47:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.936 05:47:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.936 05:47:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.936 [2024-11-27 05:47:54.580702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:06.936 [2024-11-27 05:47:54.580779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x225c240 with addr=10.0.0.2, port=4420 00:26:06.936 [2024-11-27 05:47:54.580812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225c240 is same with the state(6) to be set 00:26:06.936 [2024-11-27 05:47:54.580863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x225c240 (9): Bad file descriptor 00:26:06.936 [2024-11-27 05:47:54.581804] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:06.936 [2024-11-27 05:47:54.581869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:06.936 [2024-11-27 05:47:54.581893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:06.936 [2024-11-27 05:47:54.581916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:06.936 [2024-11-27 05:47:54.581937] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:06.936 [2024-11-27 05:47:54.581954] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:06.936 [2024-11-27 05:47:54.581967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:06.936 [2024-11-27 05:47:54.581989] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:06.936 [2024-11-27 05:47:54.582005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:06.936 05:47:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.936 05:47:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:06.936 05:47:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.898 [2024-11-27 05:47:55.584519] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:07.898 [2024-11-27 05:47:55.584538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:07.898 [2024-11-27 05:47:55.584549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:07.898 [2024-11-27 05:47:55.584556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:07.898 [2024-11-27 05:47:55.584563] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:07.898 [2024-11-27 05:47:55.584569] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:07.898 [2024-11-27 05:47:55.584574] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:07.898 [2024-11-27 05:47:55.584578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:07.898 [2024-11-27 05:47:55.584598] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:07.898 [2024-11-27 05:47:55.584616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.898 [2024-11-27 05:47:55.584624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.898 [2024-11-27 05:47:55.584633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.898 [2024-11-27 05:47:55.584640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.898 [2024-11-27 05:47:55.584647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:07.898 [2024-11-27 05:47:55.584654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.898 [2024-11-27 05:47:55.584662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.898 [2024-11-27 05:47:55.584672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.898 [2024-11-27 05:47:55.584680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:07.898 [2024-11-27 05:47:55.584686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:07.898 [2024-11-27 05:47:55.584694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:07.898 [2024-11-27 05:47:55.585142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224b910 (9): Bad file descriptor 00:26:07.898 [2024-11-27 05:47:55.586151] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:07.898 [2024-11-27 05:47:55.586164] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:07.898 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.898 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.898 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.898 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:07.898 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.898 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.898 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.898 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.898 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:07.899 05:47:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:08.833 05:47:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.833 05:47:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.833 05:47:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.833 05:47:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.833 05:47:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.833 05:47:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.833 05:47:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.833 05:47:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.833 05:47:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:08.833 05:47:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.768 [2024-11-27 05:47:57.599131] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:09.768 [2024-11-27 05:47:57.599147] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:09.768 [2024-11-27 05:47:57.599158] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:09.768 [2024-11-27 05:47:57.687429] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:09.768 [2024-11-27 05:47:57.748040] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:09.768 [2024-11-27 05:47:57.748580] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x22894a0:1 started. 00:26:09.768 [2024-11-27 05:47:57.749595] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:09.768 [2024-11-27 05:47:57.749630] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:09.768 [2024-11-27 05:47:57.749647] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:09.768 [2024-11-27 05:47:57.749660] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:09.768 [2024-11-27 05:47:57.749667] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:09.768 [2024-11-27 05:47:57.758139] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x22894a0 was disconnected and freed. delete nvme_qpair. 
00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1882276 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1882276 ']' 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1882276 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:10.026 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.027 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1882276 
00:26:10.027 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:10.027 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:10.027 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1882276' 00:26:10.027 killing process with pid 1882276 00:26:10.027 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1882276 00:26:10.027 05:47:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1882276 00:26:10.284 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:10.284 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.284 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:10.284 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.284 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:10.284 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.284 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.284 rmmod nvme_tcp 00:26:10.284 rmmod nvme_fabrics 00:26:10.285 rmmod nvme_keyring 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1882245 ']' 00:26:10.285 
05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1882245 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1882245 ']' 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1882245 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1882245 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1882245' 00:26:10.285 killing process with pid 1882245 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1882245 00:26:10.285 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1882245 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:10.543 05:47:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.543 05:47:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.450 05:48:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:12.450 00:26:12.450 real 0m20.331s 00:26:12.450 user 0m24.418s 00:26:12.450 sys 0m5.813s 00:26:12.450 05:48:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.450 05:48:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.450 ************************************ 00:26:12.450 END TEST nvmf_discovery_remove_ifc 00:26:12.450 ************************************ 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.710 ************************************ 
00:26:12.710 START TEST nvmf_identify_kernel_target 00:26:12.710 ************************************ 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:12.710 * Looking for test storage... 00:26:12.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.710 05:48:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.710 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.711 --rc genhtml_branch_coverage=1 00:26:12.711 --rc genhtml_function_coverage=1 00:26:12.711 --rc genhtml_legend=1 00:26:12.711 --rc geninfo_all_blocks=1 00:26:12.711 --rc geninfo_unexecuted_blocks=1 00:26:12.711 00:26:12.711 ' 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:12.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.711 --rc genhtml_branch_coverage=1 00:26:12.711 --rc genhtml_function_coverage=1 00:26:12.711 --rc genhtml_legend=1 00:26:12.711 --rc geninfo_all_blocks=1 00:26:12.711 --rc geninfo_unexecuted_blocks=1 00:26:12.711 00:26:12.711 ' 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:12.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.711 --rc genhtml_branch_coverage=1 00:26:12.711 --rc genhtml_function_coverage=1 00:26:12.711 --rc genhtml_legend=1 00:26:12.711 --rc geninfo_all_blocks=1 00:26:12.711 --rc geninfo_unexecuted_blocks=1 00:26:12.711 00:26:12.711 ' 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.711 --rc genhtml_branch_coverage=1 00:26:12.711 --rc genhtml_function_coverage=1 00:26:12.711 --rc genhtml_legend=1 00:26:12.711 --rc geninfo_all_blocks=1 
00:26:12.711 --rc geninfo_unexecuted_blocks=1 00:26:12.711 00:26:12.711 ' 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:12.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.711 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:12.971 05:48:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.545 05:48:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:19.545 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:19.545 05:48:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:19.545 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.545 05:48:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:19.545 Found net devices under 0000:86:00.0: cvl_0_0 00:26:19.545 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:19.546 Found net devices under 0000:86:00.1: cvl_0_1 
00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:19.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:19.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:26:19.546 00:26:19.546 --- 10.0.0.2 ping statistics --- 00:26:19.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.546 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:26:19.546 00:26:19.546 --- 10.0.0.1 ping statistics --- 00:26:19.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.546 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:19.546 
05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:19.546 05:48:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:21.449 Waiting for block devices as requested 00:26:21.449 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:21.708 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:21.708 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:21.708 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:21.708 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:21.968 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:21.968 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:21.968 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:22.228 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:22.228 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:22.228 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:22.487 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:22.487 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:22.487 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:22.487 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:26:22.747 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:22.747 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:22.747 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:22.747 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:22.747 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:22.747 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:22.747 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:22.747 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:22.747 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:22.747 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:22.747 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:23.007 No valid GPT data, bailing 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:23.007 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:23.007 00:26:23.007 Discovery Log Number of Records 2, Generation counter 2 00:26:23.007 =====Discovery Log Entry 0====== 00:26:23.007 trtype: tcp 00:26:23.007 adrfam: ipv4 00:26:23.008 subtype: current discovery subsystem 
00:26:23.008 treq: not specified, sq flow control disable supported 00:26:23.008 portid: 1 00:26:23.008 trsvcid: 4420 00:26:23.008 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:23.008 traddr: 10.0.0.1 00:26:23.008 eflags: none 00:26:23.008 sectype: none 00:26:23.008 =====Discovery Log Entry 1====== 00:26:23.008 trtype: tcp 00:26:23.008 adrfam: ipv4 00:26:23.008 subtype: nvme subsystem 00:26:23.008 treq: not specified, sq flow control disable supported 00:26:23.008 portid: 1 00:26:23.008 trsvcid: 4420 00:26:23.008 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:23.008 traddr: 10.0.0.1 00:26:23.008 eflags: none 00:26:23.008 sectype: none 00:26:23.008 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:23.008 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:23.008 ===================================================== 00:26:23.008 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:23.008 ===================================================== 00:26:23.008 Controller Capabilities/Features 00:26:23.008 ================================ 00:26:23.008 Vendor ID: 0000 00:26:23.008 Subsystem Vendor ID: 0000 00:26:23.008 Serial Number: 79b63973ba48e43cb9c8 00:26:23.008 Model Number: Linux 00:26:23.008 Firmware Version: 6.8.9-20 00:26:23.008 Recommended Arb Burst: 0 00:26:23.008 IEEE OUI Identifier: 00 00 00 00:26:23.008 Multi-path I/O 00:26:23.008 May have multiple subsystem ports: No 00:26:23.008 May have multiple controllers: No 00:26:23.008 Associated with SR-IOV VF: No 00:26:23.008 Max Data Transfer Size: Unlimited 00:26:23.008 Max Number of Namespaces: 0 00:26:23.008 Max Number of I/O Queues: 1024 00:26:23.008 NVMe Specification Version (VS): 1.3 00:26:23.008 NVMe Specification Version (Identify): 1.3 00:26:23.008 Maximum Queue Entries: 1024 
00:26:23.008 Contiguous Queues Required: No 00:26:23.008 Arbitration Mechanisms Supported 00:26:23.008 Weighted Round Robin: Not Supported 00:26:23.008 Vendor Specific: Not Supported 00:26:23.008 Reset Timeout: 7500 ms 00:26:23.008 Doorbell Stride: 4 bytes 00:26:23.008 NVM Subsystem Reset: Not Supported 00:26:23.008 Command Sets Supported 00:26:23.008 NVM Command Set: Supported 00:26:23.008 Boot Partition: Not Supported 00:26:23.008 Memory Page Size Minimum: 4096 bytes 00:26:23.008 Memory Page Size Maximum: 4096 bytes 00:26:23.008 Persistent Memory Region: Not Supported 00:26:23.008 Optional Asynchronous Events Supported 00:26:23.008 Namespace Attribute Notices: Not Supported 00:26:23.008 Firmware Activation Notices: Not Supported 00:26:23.008 ANA Change Notices: Not Supported 00:26:23.008 PLE Aggregate Log Change Notices: Not Supported 00:26:23.008 LBA Status Info Alert Notices: Not Supported 00:26:23.008 EGE Aggregate Log Change Notices: Not Supported 00:26:23.008 Normal NVM Subsystem Shutdown event: Not Supported 00:26:23.008 Zone Descriptor Change Notices: Not Supported 00:26:23.008 Discovery Log Change Notices: Supported 00:26:23.008 Controller Attributes 00:26:23.008 128-bit Host Identifier: Not Supported 00:26:23.008 Non-Operational Permissive Mode: Not Supported 00:26:23.008 NVM Sets: Not Supported 00:26:23.008 Read Recovery Levels: Not Supported 00:26:23.008 Endurance Groups: Not Supported 00:26:23.008 Predictable Latency Mode: Not Supported 00:26:23.008 Traffic Based Keep ALive: Not Supported 00:26:23.008 Namespace Granularity: Not Supported 00:26:23.008 SQ Associations: Not Supported 00:26:23.008 UUID List: Not Supported 00:26:23.008 Multi-Domain Subsystem: Not Supported 00:26:23.008 Fixed Capacity Management: Not Supported 00:26:23.008 Variable Capacity Management: Not Supported 00:26:23.008 Delete Endurance Group: Not Supported 00:26:23.008 Delete NVM Set: Not Supported 00:26:23.008 Extended LBA Formats Supported: Not Supported 00:26:23.008 Flexible 
Data Placement Supported: Not Supported 00:26:23.008 00:26:23.008 Controller Memory Buffer Support 00:26:23.008 ================================ 00:26:23.008 Supported: No 00:26:23.008 00:26:23.008 Persistent Memory Region Support 00:26:23.008 ================================ 00:26:23.008 Supported: No 00:26:23.008 00:26:23.008 Admin Command Set Attributes 00:26:23.008 ============================ 00:26:23.008 Security Send/Receive: Not Supported 00:26:23.008 Format NVM: Not Supported 00:26:23.008 Firmware Activate/Download: Not Supported 00:26:23.008 Namespace Management: Not Supported 00:26:23.008 Device Self-Test: Not Supported 00:26:23.008 Directives: Not Supported 00:26:23.008 NVMe-MI: Not Supported 00:26:23.008 Virtualization Management: Not Supported 00:26:23.008 Doorbell Buffer Config: Not Supported 00:26:23.008 Get LBA Status Capability: Not Supported 00:26:23.008 Command & Feature Lockdown Capability: Not Supported 00:26:23.008 Abort Command Limit: 1 00:26:23.008 Async Event Request Limit: 1 00:26:23.008 Number of Firmware Slots: N/A 00:26:23.008 Firmware Slot 1 Read-Only: N/A 00:26:23.008 Firmware Activation Without Reset: N/A 00:26:23.008 Multiple Update Detection Support: N/A 00:26:23.008 Firmware Update Granularity: No Information Provided 00:26:23.008 Per-Namespace SMART Log: No 00:26:23.008 Asymmetric Namespace Access Log Page: Not Supported 00:26:23.008 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:23.008 Command Effects Log Page: Not Supported 00:26:23.008 Get Log Page Extended Data: Supported 00:26:23.008 Telemetry Log Pages: Not Supported 00:26:23.008 Persistent Event Log Pages: Not Supported 00:26:23.008 Supported Log Pages Log Page: May Support 00:26:23.008 Commands Supported & Effects Log Page: Not Supported 00:26:23.008 Feature Identifiers & Effects Log Page:May Support 00:26:23.008 NVMe-MI Commands & Effects Log Page: May Support 00:26:23.008 Data Area 4 for Telemetry Log: Not Supported 00:26:23.008 Error Log Page Entries 
Supported: 1 00:26:23.008 Keep Alive: Not Supported 00:26:23.008 00:26:23.008 NVM Command Set Attributes 00:26:23.008 ========================== 00:26:23.008 Submission Queue Entry Size 00:26:23.008 Max: 1 00:26:23.008 Min: 1 00:26:23.008 Completion Queue Entry Size 00:26:23.008 Max: 1 00:26:23.008 Min: 1 00:26:23.008 Number of Namespaces: 0 00:26:23.008 Compare Command: Not Supported 00:26:23.008 Write Uncorrectable Command: Not Supported 00:26:23.008 Dataset Management Command: Not Supported 00:26:23.008 Write Zeroes Command: Not Supported 00:26:23.008 Set Features Save Field: Not Supported 00:26:23.008 Reservations: Not Supported 00:26:23.008 Timestamp: Not Supported 00:26:23.008 Copy: Not Supported 00:26:23.008 Volatile Write Cache: Not Present 00:26:23.008 Atomic Write Unit (Normal): 1 00:26:23.008 Atomic Write Unit (PFail): 1 00:26:23.008 Atomic Compare & Write Unit: 1 00:26:23.008 Fused Compare & Write: Not Supported 00:26:23.008 Scatter-Gather List 00:26:23.008 SGL Command Set: Supported 00:26:23.008 SGL Keyed: Not Supported 00:26:23.008 SGL Bit Bucket Descriptor: Not Supported 00:26:23.008 SGL Metadata Pointer: Not Supported 00:26:23.008 Oversized SGL: Not Supported 00:26:23.008 SGL Metadata Address: Not Supported 00:26:23.008 SGL Offset: Supported 00:26:23.008 Transport SGL Data Block: Not Supported 00:26:23.008 Replay Protected Memory Block: Not Supported 00:26:23.008 00:26:23.008 Firmware Slot Information 00:26:23.008 ========================= 00:26:23.008 Active slot: 0 00:26:23.008 00:26:23.008 00:26:23.008 Error Log 00:26:23.008 ========= 00:26:23.008 00:26:23.008 Active Namespaces 00:26:23.008 ================= 00:26:23.008 Discovery Log Page 00:26:23.008 ================== 00:26:23.008 Generation Counter: 2 00:26:23.008 Number of Records: 2 00:26:23.008 Record Format: 0 00:26:23.008 00:26:23.008 Discovery Log Entry 0 00:26:23.008 ---------------------- 00:26:23.008 Transport Type: 3 (TCP) 00:26:23.008 Address Family: 1 (IPv4) 00:26:23.008 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:23.008 Entry Flags: 00:26:23.008 Duplicate Returned Information: 0 00:26:23.008 Explicit Persistent Connection Support for Discovery: 0 00:26:23.008 Transport Requirements: 00:26:23.008 Secure Channel: Not Specified 00:26:23.008 Port ID: 1 (0x0001) 00:26:23.008 Controller ID: 65535 (0xffff) 00:26:23.008 Admin Max SQ Size: 32 00:26:23.008 Transport Service Identifier: 4420 00:26:23.008 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:23.008 Transport Address: 10.0.0.1 00:26:23.008 Discovery Log Entry 1 00:26:23.008 ---------------------- 00:26:23.008 Transport Type: 3 (TCP) 00:26:23.008 Address Family: 1 (IPv4) 00:26:23.008 Subsystem Type: 2 (NVM Subsystem) 00:26:23.008 Entry Flags: 00:26:23.008 Duplicate Returned Information: 0 00:26:23.009 Explicit Persistent Connection Support for Discovery: 0 00:26:23.009 Transport Requirements: 00:26:23.009 Secure Channel: Not Specified 00:26:23.009 Port ID: 1 (0x0001) 00:26:23.009 Controller ID: 65535 (0xffff) 00:26:23.009 Admin Max SQ Size: 32 00:26:23.009 Transport Service Identifier: 4420 00:26:23.009 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:23.009 Transport Address: 10.0.0.1 00:26:23.009 05:48:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:23.270 get_feature(0x01) failed 00:26:23.270 get_feature(0x02) failed 00:26:23.270 get_feature(0x04) failed 00:26:23.270 ===================================================== 00:26:23.270 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:23.270 ===================================================== 00:26:23.270 Controller Capabilities/Features 00:26:23.270 ================================ 00:26:23.270 Vendor ID: 0000 00:26:23.270 Subsystem Vendor ID: 
0000 00:26:23.270 Serial Number: 314b03f0f42d0e91886f 00:26:23.270 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:23.270 Firmware Version: 6.8.9-20 00:26:23.270 Recommended Arb Burst: 6 00:26:23.270 IEEE OUI Identifier: 00 00 00 00:26:23.270 Multi-path I/O 00:26:23.270 May have multiple subsystem ports: Yes 00:26:23.270 May have multiple controllers: Yes 00:26:23.270 Associated with SR-IOV VF: No 00:26:23.270 Max Data Transfer Size: Unlimited 00:26:23.270 Max Number of Namespaces: 1024 00:26:23.270 Max Number of I/O Queues: 128 00:26:23.270 NVMe Specification Version (VS): 1.3 00:26:23.270 NVMe Specification Version (Identify): 1.3 00:26:23.270 Maximum Queue Entries: 1024 00:26:23.270 Contiguous Queues Required: No 00:26:23.270 Arbitration Mechanisms Supported 00:26:23.270 Weighted Round Robin: Not Supported 00:26:23.270 Vendor Specific: Not Supported 00:26:23.270 Reset Timeout: 7500 ms 00:26:23.270 Doorbell Stride: 4 bytes 00:26:23.270 NVM Subsystem Reset: Not Supported 00:26:23.270 Command Sets Supported 00:26:23.270 NVM Command Set: Supported 00:26:23.270 Boot Partition: Not Supported 00:26:23.270 Memory Page Size Minimum: 4096 bytes 00:26:23.270 Memory Page Size Maximum: 4096 bytes 00:26:23.270 Persistent Memory Region: Not Supported 00:26:23.270 Optional Asynchronous Events Supported 00:26:23.270 Namespace Attribute Notices: Supported 00:26:23.270 Firmware Activation Notices: Not Supported 00:26:23.270 ANA Change Notices: Supported 00:26:23.270 PLE Aggregate Log Change Notices: Not Supported 00:26:23.270 LBA Status Info Alert Notices: Not Supported 00:26:23.270 EGE Aggregate Log Change Notices: Not Supported 00:26:23.270 Normal NVM Subsystem Shutdown event: Not Supported 00:26:23.270 Zone Descriptor Change Notices: Not Supported 00:26:23.270 Discovery Log Change Notices: Not Supported 00:26:23.270 Controller Attributes 00:26:23.270 128-bit Host Identifier: Supported 00:26:23.270 Non-Operational Permissive Mode: Not Supported 00:26:23.270 NVM Sets: Not 
Supported 00:26:23.270 Read Recovery Levels: Not Supported 00:26:23.270 Endurance Groups: Not Supported 00:26:23.270 Predictable Latency Mode: Not Supported 00:26:23.270 Traffic Based Keep ALive: Supported 00:26:23.270 Namespace Granularity: Not Supported 00:26:23.270 SQ Associations: Not Supported 00:26:23.270 UUID List: Not Supported 00:26:23.270 Multi-Domain Subsystem: Not Supported 00:26:23.270 Fixed Capacity Management: Not Supported 00:26:23.270 Variable Capacity Management: Not Supported 00:26:23.270 Delete Endurance Group: Not Supported 00:26:23.270 Delete NVM Set: Not Supported 00:26:23.270 Extended LBA Formats Supported: Not Supported 00:26:23.270 Flexible Data Placement Supported: Not Supported 00:26:23.270 00:26:23.270 Controller Memory Buffer Support 00:26:23.270 ================================ 00:26:23.270 Supported: No 00:26:23.270 00:26:23.270 Persistent Memory Region Support 00:26:23.270 ================================ 00:26:23.270 Supported: No 00:26:23.270 00:26:23.270 Admin Command Set Attributes 00:26:23.270 ============================ 00:26:23.270 Security Send/Receive: Not Supported 00:26:23.270 Format NVM: Not Supported 00:26:23.270 Firmware Activate/Download: Not Supported 00:26:23.270 Namespace Management: Not Supported 00:26:23.270 Device Self-Test: Not Supported 00:26:23.270 Directives: Not Supported 00:26:23.270 NVMe-MI: Not Supported 00:26:23.270 Virtualization Management: Not Supported 00:26:23.270 Doorbell Buffer Config: Not Supported 00:26:23.270 Get LBA Status Capability: Not Supported 00:26:23.270 Command & Feature Lockdown Capability: Not Supported 00:26:23.270 Abort Command Limit: 4 00:26:23.270 Async Event Request Limit: 4 00:26:23.270 Number of Firmware Slots: N/A 00:26:23.270 Firmware Slot 1 Read-Only: N/A 00:26:23.270 Firmware Activation Without Reset: N/A 00:26:23.270 Multiple Update Detection Support: N/A 00:26:23.270 Firmware Update Granularity: No Information Provided 00:26:23.270 Per-Namespace SMART Log: Yes 
00:26:23.270 Asymmetric Namespace Access Log Page: Supported 00:26:23.270 ANA Transition Time : 10 sec 00:26:23.270 00:26:23.270 Asymmetric Namespace Access Capabilities 00:26:23.270 ANA Optimized State : Supported 00:26:23.270 ANA Non-Optimized State : Supported 00:26:23.270 ANA Inaccessible State : Supported 00:26:23.270 ANA Persistent Loss State : Supported 00:26:23.270 ANA Change State : Supported 00:26:23.270 ANAGRPID is not changed : No 00:26:23.270 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:23.270 00:26:23.270 ANA Group Identifier Maximum : 128 00:26:23.270 Number of ANA Group Identifiers : 128 00:26:23.271 Max Number of Allowed Namespaces : 1024 00:26:23.271 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:23.271 Command Effects Log Page: Supported 00:26:23.271 Get Log Page Extended Data: Supported 00:26:23.271 Telemetry Log Pages: Not Supported 00:26:23.271 Persistent Event Log Pages: Not Supported 00:26:23.271 Supported Log Pages Log Page: May Support 00:26:23.271 Commands Supported & Effects Log Page: Not Supported 00:26:23.271 Feature Identifiers & Effects Log Page:May Support 00:26:23.271 NVMe-MI Commands & Effects Log Page: May Support 00:26:23.271 Data Area 4 for Telemetry Log: Not Supported 00:26:23.271 Error Log Page Entries Supported: 128 00:26:23.271 Keep Alive: Supported 00:26:23.271 Keep Alive Granularity: 1000 ms 00:26:23.271 00:26:23.271 NVM Command Set Attributes 00:26:23.271 ========================== 00:26:23.271 Submission Queue Entry Size 00:26:23.271 Max: 64 00:26:23.271 Min: 64 00:26:23.271 Completion Queue Entry Size 00:26:23.271 Max: 16 00:26:23.271 Min: 16 00:26:23.271 Number of Namespaces: 1024 00:26:23.271 Compare Command: Not Supported 00:26:23.271 Write Uncorrectable Command: Not Supported 00:26:23.271 Dataset Management Command: Supported 00:26:23.271 Write Zeroes Command: Supported 00:26:23.271 Set Features Save Field: Not Supported 00:26:23.271 Reservations: Not Supported 00:26:23.271 Timestamp: Not Supported 
00:26:23.271 Copy: Not Supported 00:26:23.271 Volatile Write Cache: Present 00:26:23.271 Atomic Write Unit (Normal): 1 00:26:23.271 Atomic Write Unit (PFail): 1 00:26:23.271 Atomic Compare & Write Unit: 1 00:26:23.271 Fused Compare & Write: Not Supported 00:26:23.271 Scatter-Gather List 00:26:23.271 SGL Command Set: Supported 00:26:23.271 SGL Keyed: Not Supported 00:26:23.271 SGL Bit Bucket Descriptor: Not Supported 00:26:23.271 SGL Metadata Pointer: Not Supported 00:26:23.271 Oversized SGL: Not Supported 00:26:23.271 SGL Metadata Address: Not Supported 00:26:23.271 SGL Offset: Supported 00:26:23.271 Transport SGL Data Block: Not Supported 00:26:23.271 Replay Protected Memory Block: Not Supported 00:26:23.271 00:26:23.271 Firmware Slot Information 00:26:23.271 ========================= 00:26:23.271 Active slot: 0 00:26:23.271 00:26:23.271 Asymmetric Namespace Access 00:26:23.271 =========================== 00:26:23.271 Change Count : 0 00:26:23.271 Number of ANA Group Descriptors : 1 00:26:23.271 ANA Group Descriptor : 0 00:26:23.271 ANA Group ID : 1 00:26:23.271 Number of NSID Values : 1 00:26:23.271 Change Count : 0 00:26:23.271 ANA State : 1 00:26:23.271 Namespace Identifier : 1 00:26:23.271 00:26:23.271 Commands Supported and Effects 00:26:23.271 ============================== 00:26:23.271 Admin Commands 00:26:23.271 -------------- 00:26:23.271 Get Log Page (02h): Supported 00:26:23.271 Identify (06h): Supported 00:26:23.271 Abort (08h): Supported 00:26:23.271 Set Features (09h): Supported 00:26:23.271 Get Features (0Ah): Supported 00:26:23.271 Asynchronous Event Request (0Ch): Supported 00:26:23.271 Keep Alive (18h): Supported 00:26:23.271 I/O Commands 00:26:23.271 ------------ 00:26:23.271 Flush (00h): Supported 00:26:23.271 Write (01h): Supported LBA-Change 00:26:23.271 Read (02h): Supported 00:26:23.271 Write Zeroes (08h): Supported LBA-Change 00:26:23.271 Dataset Management (09h): Supported 00:26:23.271 00:26:23.271 Error Log 00:26:23.271 ========= 
00:26:23.271 Entry: 0 00:26:23.271 Error Count: 0x3 00:26:23.271 Submission Queue Id: 0x0 00:26:23.271 Command Id: 0x5 00:26:23.271 Phase Bit: 0 00:26:23.271 Status Code: 0x2 00:26:23.271 Status Code Type: 0x0 00:26:23.271 Do Not Retry: 1 00:26:23.271 Error Location: 0x28 00:26:23.271 LBA: 0x0 00:26:23.271 Namespace: 0x0 00:26:23.271 Vendor Log Page: 0x0 00:26:23.271 ----------- 00:26:23.271 Entry: 1 00:26:23.271 Error Count: 0x2 00:26:23.271 Submission Queue Id: 0x0 00:26:23.271 Command Id: 0x5 00:26:23.271 Phase Bit: 0 00:26:23.271 Status Code: 0x2 00:26:23.271 Status Code Type: 0x0 00:26:23.271 Do Not Retry: 1 00:26:23.271 Error Location: 0x28 00:26:23.271 LBA: 0x0 00:26:23.271 Namespace: 0x0 00:26:23.271 Vendor Log Page: 0x0 00:26:23.271 ----------- 00:26:23.271 Entry: 2 00:26:23.271 Error Count: 0x1 00:26:23.271 Submission Queue Id: 0x0 00:26:23.271 Command Id: 0x4 00:26:23.271 Phase Bit: 0 00:26:23.271 Status Code: 0x2 00:26:23.271 Status Code Type: 0x0 00:26:23.271 Do Not Retry: 1 00:26:23.271 Error Location: 0x28 00:26:23.271 LBA: 0x0 00:26:23.271 Namespace: 0x0 00:26:23.271 Vendor Log Page: 0x0 00:26:23.271 00:26:23.271 Number of Queues 00:26:23.271 ================ 00:26:23.271 Number of I/O Submission Queues: 128 00:26:23.271 Number of I/O Completion Queues: 128 00:26:23.271 00:26:23.271 ZNS Specific Controller Data 00:26:23.271 ============================ 00:26:23.271 Zone Append Size Limit: 0 00:26:23.271 00:26:23.271 00:26:23.271 Active Namespaces 00:26:23.271 ================= 00:26:23.271 get_feature(0x05) failed 00:26:23.271 Namespace ID:1 00:26:23.271 Command Set Identifier: NVM (00h) 00:26:23.271 Deallocate: Supported 00:26:23.271 Deallocated/Unwritten Error: Not Supported 00:26:23.271 Deallocated Read Value: Unknown 00:26:23.271 Deallocate in Write Zeroes: Not Supported 00:26:23.271 Deallocated Guard Field: 0xFFFF 00:26:23.271 Flush: Supported 00:26:23.271 Reservation: Not Supported 00:26:23.271 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:23.271 Size (in LBAs): 3125627568 (1490GiB) 00:26:23.271 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:23.271 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:23.272 UUID: 10a8f69c-c03c-4c55-86c7-924f5297003b 00:26:23.272 Thin Provisioning: Not Supported 00:26:23.272 Per-NS Atomic Units: Yes 00:26:23.272 Atomic Boundary Size (Normal): 0 00:26:23.272 Atomic Boundary Size (PFail): 0 00:26:23.272 Atomic Boundary Offset: 0 00:26:23.272 NGUID/EUI64 Never Reused: No 00:26:23.272 ANA group ID: 1 00:26:23.272 Namespace Write Protected: No 00:26:23.272 Number of LBA Formats: 1 00:26:23.272 Current LBA Format: LBA Format #00 00:26:23.272 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:23.272 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:23.272 rmmod nvme_tcp 00:26:23.272 rmmod nvme_fabrics 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.272 05:48:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.812 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:25.812 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:25.812 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:25.812 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:25.812 05:48:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:25.812 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:25.812 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:25.812 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:25.812 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:25.812 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:25.812 05:48:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:28.349 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:28.349 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:26:29.731 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:29.731 00:26:29.731 real 0m17.179s 00:26:29.731 user 0m4.294s 00:26:29.731 sys 0m8.739s 00:26:29.731 05:48:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.731 05:48:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:29.731 ************************************ 00:26:29.731 END TEST nvmf_identify_kernel_target 00:26:29.731 ************************************ 00:26:29.731 05:48:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:29.731 05:48:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:29.731 05:48:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.731 05:48:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.991 ************************************ 00:26:29.991 START TEST nvmf_auth_host 00:26:29.991 ************************************ 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:29.992 * Looking for test storage... 
00:26:29.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:29.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.992 --rc genhtml_branch_coverage=1 00:26:29.992 --rc genhtml_function_coverage=1 00:26:29.992 --rc genhtml_legend=1 00:26:29.992 --rc geninfo_all_blocks=1 00:26:29.992 --rc geninfo_unexecuted_blocks=1 00:26:29.992 00:26:29.992 ' 00:26:29.992 05:48:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:29.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.992 --rc genhtml_branch_coverage=1 00:26:29.992 --rc genhtml_function_coverage=1 00:26:29.992 --rc genhtml_legend=1 00:26:29.992 --rc geninfo_all_blocks=1 00:26:29.992 --rc geninfo_unexecuted_blocks=1 00:26:29.992 00:26:29.992 ' 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:29.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.992 --rc genhtml_branch_coverage=1 00:26:29.992 --rc genhtml_function_coverage=1 00:26:29.992 --rc genhtml_legend=1 00:26:29.992 --rc geninfo_all_blocks=1 00:26:29.992 --rc geninfo_unexecuted_blocks=1 00:26:29.992 00:26:29.992 ' 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:29.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:29.992 --rc genhtml_branch_coverage=1 00:26:29.992 --rc genhtml_function_coverage=1 00:26:29.992 --rc genhtml_legend=1 00:26:29.992 --rc geninfo_all_blocks=1 00:26:29.992 --rc geninfo_unexecuted_blocks=1 00:26:29.992 00:26:29.992 ' 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
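The `lt 1.15 2` / `cmp_versions` calls traced above compare dotted version strings field by field to pick the right lcov option set. A minimal Python sketch of that comparison logic (the function name, separator handling, and zero-padding are illustrative, not the exact `scripts/common.sh` implementation):

```python
import re

def cmp_versions(ver1: str, op: str, ver2: str) -> bool:
    # Split on dots/dashes (mirrors IFS=.- in the trace), compare numeric
    # fields left to right, padding the shorter list with zeros, which is
    # what the ver1_l/ver2_l bookkeeping above accomplishes.
    p1 = [int(x) for x in re.split(r"[.-]", ver1) if x.isdigit()]
    p2 = [int(x) for x in re.split(r"[.-]", ver2) if x.isdigit()]
    n = max(len(p1), len(p2))
    p1 += [0] * (n - len(p1))
    p2 += [0] * (n - len(p2))
    for a, b in zip(p1, p2):
        if a > b:
            return op in (">", ">=")
        if a < b:
            return op in ("<", "<=")
    return op in ("<=", ">=")  # all fields equal

# The check in the trace: is lcov 1.15 older than 2?
older_lcov = cmp_versions("1.15", "<", "2")
```

Here `1.15 < 2` holds because the first fields already differ (1 < 2); the `.15` is never consulted, which is the point of field-wise rather than string comparison.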
00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.992 05:48:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:29.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:29.992 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:29.993 05:48:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:29.993 05:48:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:36.573 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:36.573 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:36.573 Found net devices under 0000:86:00.0: cvl_0_0 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:36.573 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:36.574 Found net devices under 0000:86:00.1: cvl_0_1 00:26:36.574 05:48:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.574 05:48:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:26:36.574 00:26:36.574 --- 10.0.0.2 ping statistics --- 00:26:36.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.574 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:26:36.574 00:26:36.574 --- 10.0.0.1 ping statistics --- 00:26:36.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.574 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1894760 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1894760 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1894760 ']' 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.574 05:48:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.574 05:48:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a532eefde78a3b22ffdf2910f7a1b630 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fec 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a532eefde78a3b22ffdf2910f7a1b630 0 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a532eefde78a3b22ffdf2910f7a1b630 0 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a532eefde78a3b22ffdf2910f7a1b630 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fec 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fec 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fec 
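The `gen_dhchap_key null 32` sequence above draws random bytes with `xxd` from `/dev/urandom`, then `format_dhchap_key` wraps the ASCII hex string into the NVMe DH-HMAC-CHAP secret representation `DHHC-1:<hash>:<base64>:`, where the base64 payload is the secret followed by its CRC32 in little-endian order. A hedged Python sketch of that flow (helper names follow the trace; the CRC/base64 layout is inferred from the log and the NVMe-oF secret format, so treat this as an illustration, not SPDK's implementation):

```python
import base64
import os
import zlib

def format_dhchap_key(key: str, digest: int) -> str:
    # Append CRC32 of the ASCII secret (little-endian), base64 the result,
    # and emit the "DHHC-1:<2-hex-digit hash id>:<base64>:" wrapper.
    crc = zlib.crc32(key.encode()).to_bytes(4, "little")
    b64 = base64.b64encode(key.encode() + crc).decode()
    return f"DHHC-1:{digest:02x}:{b64}:"

def gen_dhchap_key(digest: int, hex_len: int) -> str:
    # "gen_dhchap_key null 32" -> 16 random bytes, 32 hex chars, digest id 0.
    key = os.urandom(hex_len // 2).hex()
    return format_dhchap_key(key, digest)

secret = gen_dhchap_key(0, 32)  # analogous to keys[0] in the trace
```

The digest id selects the hash used when the secret is transformed (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), matching the `digests` associative array in the trace; the trailing CRC lets a consumer verify the secret was transcribed intact.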
00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e3cd6ecf58aa48a4974051f5ab1a91211d78099a52becee02cdc4a22e8ce8bb0 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.T31 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e3cd6ecf58aa48a4974051f5ab1a91211d78099a52becee02cdc4a22e8ce8bb0 3 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e3cd6ecf58aa48a4974051f5ab1a91211d78099a52becee02cdc4a22e8ce8bb0 3 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e3cd6ecf58aa48a4974051f5ab1a91211d78099a52becee02cdc4a22e8ce8bb0 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.T31 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.T31 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.T31 00:26:36.574 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9e2d7c9f7b4ed1712bbb960db4015b4810867ead1f43dee6 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jJB 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9e2d7c9f7b4ed1712bbb960db4015b4810867ead1f43dee6 0 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9e2d7c9f7b4ed1712bbb960db4015b4810867ead1f43dee6 0 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9e2d7c9f7b4ed1712bbb960db4015b4810867ead1f43dee6 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jJB 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jJB 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jJB 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2d4fe25f64ee1439f19d980d3165dd9ea55703966fc40f63 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.4Sc 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2d4fe25f64ee1439f19d980d3165dd9ea55703966fc40f63 2 00:26:36.575 05:48:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2d4fe25f64ee1439f19d980d3165dd9ea55703966fc40f63 2 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2d4fe25f64ee1439f19d980d3165dd9ea55703966fc40f63 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.4Sc 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.4Sc 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.4Sc 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e6b510671f84123bab6362f7778ecea5 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.PZG 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e6b510671f84123bab6362f7778ecea5 1 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e6b510671f84123bab6362f7778ecea5 1 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e6b510671f84123bab6362f7778ecea5 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.PZG 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.PZG 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.PZG 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cb765aab454db38bd95a258320a9e3b3 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7YT 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cb765aab454db38bd95a258320a9e3b3 1 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cb765aab454db38bd95a258320a9e3b3 1 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cb765aab454db38bd95a258320a9e3b3 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7YT 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7YT 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.7YT 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.575 05:48:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:36.575 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1d6f2d4875b07127e999561af4b84828d4e029835ceefde9 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WtU 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1d6f2d4875b07127e999561af4b84828d4e029835ceefde9 2 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1d6f2d4875b07127e999561af4b84828d4e029835ceefde9 2 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1d6f2d4875b07127e999561af4b84828d4e029835ceefde9 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WtU 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WtU 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.WtU 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1d6a516c41824c4082e47771d2d7f365 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.rbr 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1d6a516c41824c4082e47771d2d7f365 0 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1d6a516c41824c4082e47771d2d7f365 0 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1d6a516c41824c4082e47771d2d7f365 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.rbr 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.rbr 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.rbr 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bc344251f3a5ec6780d2d3d72a3cff1b90a2ead3f70848d07d8e09a20bee7345 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.nO1 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bc344251f3a5ec6780d2d3d72a3cff1b90a2ead3f70848d07d8e09a20bee7345 3 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bc344251f3a5ec6780d2d3d72a3cff1b90a2ead3f70848d07d8e09a20bee7345 3 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:36.835 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bc344251f3a5ec6780d2d3d72a3cff1b90a2ead3f70848d07d8e09a20bee7345 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:36.836 05:48:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.nO1 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.nO1 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.nO1 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1894760 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1894760 ']' 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:36.836 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fec 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.T31 ]] 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.T31 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jJB 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.4Sc ]] 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4Sc 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.PZG 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.7YT ]] 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7YT 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.096 05:48:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.WtU 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.rbr ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.rbr 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.nO1 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.096 05:48:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:37.096 05:48:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:40.390 Waiting for block devices as requested 00:26:40.390 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:40.390 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:40.390 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:40.390 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:40.390 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:40.390 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:40.390 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:40.390 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:40.390 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:40.650 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:40.650 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:40.650 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:40.650 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:40.910 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:40.910 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:40.910 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:41.169 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:41.737 No valid GPT data, bailing 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:41.737 00:26:41.737 Discovery Log Number of Records 2, Generation counter 2 00:26:41.737 =====Discovery Log Entry 0====== 00:26:41.737 trtype: tcp 00:26:41.737 adrfam: ipv4 00:26:41.737 subtype: current discovery subsystem 00:26:41.737 treq: not specified, sq flow control disable supported 00:26:41.737 portid: 1 00:26:41.737 trsvcid: 4420 00:26:41.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:41.737 traddr: 10.0.0.1 00:26:41.737 eflags: none 00:26:41.737 sectype: none 00:26:41.737 =====Discovery Log Entry 1====== 00:26:41.737 trtype: tcp 00:26:41.737 adrfam: ipv4 00:26:41.737 subtype: nvme subsystem 00:26:41.737 treq: not specified, sq flow control disable supported 00:26:41.737 portid: 1 00:26:41.737 trsvcid: 4420 00:26:41.737 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:41.737 traddr: 10.0.0.1 00:26:41.737 eflags: none 00:26:41.737 sectype: none 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==:
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==:
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==:
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]]
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==:
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:41.737 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.738 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.998 nvme0n1
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1:
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=:
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1:
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]]
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=:
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
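The `key=`/`ckey=` values being written into nvmet above are DH-HMAC-CHAP secrets in the `DHHC-1:<transform>:<base64>:` representation. As a rough illustration (a hypothetical helper, not part of the SPDK test scripts; it assumes the NVMe TP 8006 convention that the base64 payload is the secret material followed by a 4-byte CRC-32), the fields can be pulled apart like this:

```python
import base64

def split_dhchap_secret(secret: str):
    """Split a DHHC-1 secret into (transform id, secret length in bytes).

    Hypothetical illustration only: assumes the payload layout
    base64(secret || 4-byte CRC-32) described in NVMe TP 8006.
    """
    prefix, transform_id, b64 = secret.rstrip(":").split(":", 2)
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64)
    # Drop the assumed 4-byte CRC-32 trailer to get the secret length.
    return transform_id, len(blob) - 4

# Key 1 from this run: no PSK transform ("00"), 48-byte secret.
print(split_dhchap_secret(
    "DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==:"))
# → ('00', 48)
```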
00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.998 05:48:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.256 nvme0n1 00:26:42.256 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.256 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.256 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.256 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.256 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.256 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.256 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.256 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.256 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.256 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.257 05:48:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.257 
05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.257 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.516 nvme0n1 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.516 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:42.776 nvme0n1 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.776 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.777 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.777 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.777 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.777 nvme0n1 00:26:42.777 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.777 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.777 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:42.777 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.777 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.777 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.036 05:48:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.036 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.037 nvme0n1 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.037 05:48:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.037 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.037 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.037 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.037 
05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:43.297 
05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.297 05:48:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.297 nvme0n1 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.297 05:48:31 
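
The trace above completes one full iteration of the auth matrix: install key 0 on the target, restrict the initiator to one digest/dhgroup pair, attach with `--dhchap-key key0`, confirm `nvme0` exists, then detach. A dry-run sketch of that loop follows; the `echo` lines stand in for SPDK's `rpc_cmd` calls (which need a live target), and the dhgroup list is a hypothetical subset of what the full run covers.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the host/auth.sh matrix traced above. The echo commands
# stand in for SPDK rpc_cmd invocations; key ids mirror the trace.
digests=("sha256")
dhgroups=("ffdhe3072" "ffdhe4096")   # hypothetical subset of the full matrix
keys=(key0 key1 key2 key3 key4)      # keyid 4 has no controller key (ckey)

auth_matrix() {
  local digest dhgroup keyid
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # 1. install keyN (and ckeyN, if any) on the target subsystem
        echo "set_key $digest $dhgroup $keyid"
        # 2. restrict the initiator to exactly this digest/dhgroup pair
        echo "bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
        # 3. authenticate and connect using keyN
        echo "bdev_nvme_attach_controller --dhchap-key key$keyid"
        # 4. confirm the controller came up, then detach before the next key
        echo "bdev_nvme_detach_controller nvme0"
      done
    done
  done
}

auth_matrix
```

Each iteration ends with a detach so the next key starts from a clean controller list, which is why `bdev_nvme_get_controllers` is polled between steps in the trace.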
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.297 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.558 05:48:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.558 nvme0n1 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.558 05:48:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.558 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.818 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.819 nvme0n1 00:26:43.819 05:48:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:43.819 05:48:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.819 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
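
The DHHC-1 strings echoed by auth.sh follow the NVMe-oF secret representation `DHHC-1:<t>:<base64 secret>:`, where (per the in-band authentication spec) `<t>` names the hash used to transform the configured secret: 00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512. A minimal parser sketch; the sample key below is made up, not one from this run.

```shell
#!/usr/bin/env bash
# Split a "DHHC-1:<hmac>:<base64>:" secret into its fields and report the
# decoded secret length. The sample input is a hypothetical 32-byte key.
parse_dhchap_secret() {
  local IFS=':' prefix hmac b64 rest
  read -r prefix hmac b64 rest <<< "$1"
  echo "prefix=$prefix hmac=$hmac bytes=$(printf '%s' "$b64" | base64 -d | wc -c)"
}

parse_dhchap_secret "DHHC-1:01:$(printf 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' | base64):"
# → prefix=DHHC-1 hmac=01 bytes=32
```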
get_main_ns_ip 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.079 nvme0n1 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.079 05:48:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.079 05:48:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.079 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.339 nvme0n1 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
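
The `get_main_ns_ip` trace repeated above maps the transport to the *name* of an environment variable (`rdma` → `NVMF_FIRST_TARGET_IP`, `tcp` → `NVMF_INITIATOR_IP`) and then dereferences it, which is why `10.0.0.1` is echoed on this TCP run. A standalone sketch of that selection; the two addresses are example values, not read from a live setup.

```shell
#!/usr/bin/env bash
# Sketch of the get_main_ns_ip logic traced above: pick the env-var name
# for the transport, then expand it indirectly. Addresses are examples.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

get_main_ns_ip() {
  local transport=$1 var ip
  local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  var=${ip_candidates[$transport]}
  [[ -z $var ]] && return 1   # unknown transport
  ip=${!var}                  # indirect expansion: the variable named by $var
  [[ -z $ip ]] && return 1    # name known but variable unset
  echo "$ip"
}

get_main_ns_ip tcp   # → 10.0.0.1
```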
common/autotest_common.sh@10 -- # set +x 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.339 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.340 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.614 nvme0n1 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:26:44.614 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:44.615 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.615 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.615 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.615 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.615 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:44.615 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:44.615 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.615 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:44.615 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:44.615 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:44.884 
05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.884 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.885 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.885 nvme0n1 00:26:44.885 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.885 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.885 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.885 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.885 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.885 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.144 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.144 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.145 05:48:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.145 05:48:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.404 nvme0n1 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.404 05:48:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:45.404 
05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.404 05:48:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.404 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.664 nvme0n1 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.664 05:48:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:26:45.664 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.665 
05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.665 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.924 nvme0n1 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.924 05:48:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.924 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.925 05:48:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.493 nvme0n1 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.493 05:48:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.493 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.753 nvme0n1 00:26:46.753 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.753 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.753 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.753 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.753 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.753 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.012 05:48:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.012 05:48:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.271 nvme0n1 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.271 05:48:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.271 05:48:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.271 05:48:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.271 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.531 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.531 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.531 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.791 nvme0n1 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.791 05:48:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.791 05:48:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.791 05:48:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.360 nvme0n1 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.360 05:48:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.360 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.929 nvme0n1 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.929 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.930 05:48:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.930 05:48:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.930 05:48:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.930 05:48:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.498 nvme0n1 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.498 05:48:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.498 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.499 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.499 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.499 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.499 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.499 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.067 nvme0n1 00:26:50.067 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.067 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.067 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.067 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.067 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.067 05:48:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.067 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.326 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.895 nvme0n1 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.895 
05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.895 05:48:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.464 nvme0n1 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.464 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.465 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.725 nvme0n1 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.725 
05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.725 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.985 nvme0n1 
00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:51.985 05:48:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.985 
05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.985 nvme0n1 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.985 05:48:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.985 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.244 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.244 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.244 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.244 05:48:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.244 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.245 nvme0n1 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.245 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.505 05:48:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.505 nvme0n1 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.505 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.506 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.765 nvme0n1 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.765 
05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.765 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.766 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.026 nvme0n1 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.026 05:48:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 
00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.026 05:48:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.026 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.286 nvme0n1 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.286 05:48:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.286 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.546 nvme0n1 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.546 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.806 nvme0n1 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.806 05:48:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.806 05:48:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:53.806 05:48:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.806 05:48:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.066 nvme0n1 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.066 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.325 
05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.325 nvme0n1 00:26:54.325 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.326 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.326 05:48:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.326 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.326 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.584 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.584 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.584 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.584 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.584 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.584 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.584 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.585 05:48:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.585 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.844 nvme0n1 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.844 05:48:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.844 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.845 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.845 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.845 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.845 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.104 nvme0n1 00:26:55.104 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.104 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.104 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.104 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.104 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.104 05:48:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.104 05:48:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.104 05:48:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.104 
05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.104 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.363 nvme0n1 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.363 05:48:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:55.363 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.622 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.882 nvme0n1 
00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:55.882 05:48:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.882 
05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.882 05:48:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.450 nvme0n1 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.450 05:48:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.450 05:48:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.450 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.451 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.451 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.710 nvme0n1 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:56.710 05:48:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.710 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.969 05:48:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.969 05:48:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.227 nvme0n1 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.227 05:48:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.227 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:57.228 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.852 nvme0n1 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.852 05:48:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.852 05:48:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.448 nvme0n1 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:58.448 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.449 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.081 nvme0n1 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.081 05:48:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.651 nvme0n1 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.651 05:48:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.220 nvme0n1 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.220 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:00.788 nvme0n1 00:27:00.788 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.788 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.788 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.788 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.788 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.788 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.047 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:01.048 05:48:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.048 05:48:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.048 nvme0n1 00:27:01.048 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.048 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.048 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.048 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.048 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.048 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.048 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.048 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.048 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.048 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.313 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.313 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.313 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:01.313 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.313 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.313 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.313 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.313 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.314 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.315 nvme0n1 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.315 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.316 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.578 nvme0n1 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.578 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.837 nvme0n1 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.837 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.838 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:02.097 nvme0n1 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:02.097 05:48:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.097 05:48:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.097 05:48:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.356 nvme0n1 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:27:02.356 05:48:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.356 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.357 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.357 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.357 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.357 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.357 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.357 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.357 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.357 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.615 nvme0n1 00:27:02.615 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.616 
05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.616 05:48:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.616 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.876 nvme0n1 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.876 05:48:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.876 05:48:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.876 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.136 nvme0n1 00:27:03.136 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.136 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.136 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.136 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.136 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.136 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.136 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.136 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.136 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.136 05:48:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:03.136 05:48:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.136 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.396 nvme0n1 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.396 
05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.396 
05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.396 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.656 nvme0n1 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.656 05:48:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.656 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.916 nvme0n1 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:27:03.916 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.917 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.176 05:48:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.176 nvme0n1 00:27:04.176 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.176 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.176 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.176 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.176 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==: 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]] 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.436 05:48:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.436 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.695 nvme0n1 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.695 05:48:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.695 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.696 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.955 nvme0n1 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.955 05:48:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1: 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]] 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.955 05:48:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.955 05:48:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.525 nvme0n1 00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==:
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==:
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==:
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]]
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==:
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.525 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.784 nvme0n1
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN:
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq:
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN:
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]]
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq:
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.784 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.043 05:48:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.302 nvme0n1
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==:
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/:
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:06.302 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==:
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]]
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/:
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.303 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.871 nvme0n1
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=:
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=:
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:06.871 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.872 05:48:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.131 nvme0n1
00:27:07.131 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.131 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:07.131 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:07.131 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.131 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.131 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.131 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:07.131 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:07.131 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.131 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.132 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.132 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1:
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=:
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTUzMmVlZmRlNzhhM2IyMmZmZGYyOTEwZjdhMWI2MzByNWp1:
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=: ]]
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTNjZDZlY2Y1OGFhNDhhNDk3NDA1MWY1YWIxYTkxMjExZDc4MDk5YTUyYmVjZWUwMmNkYzRhMjJlOGNlOGJiML7Q7pk=:
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.391 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.960 nvme0n1
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==:
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==:
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==:
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]]
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==:
00:27:07.960 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:07.961 05:48:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.529 nvme0n1
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN:
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq:
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN:
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]]
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq:
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:08.529 05:48:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:09.095 nvme0n1
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==:
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/:
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWQ2ZjJkNDg3NWIwNzEyN2U5OTk1NjFhZjRiODQ4MjhkNGUwMjk4MzVjZWVmZGU56cAfSA==:
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/: ]]
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWQ2YTUxNmM0MTgyNGM0MDgyZTQ3NzcxZDJkN2YzNjURj/2/:
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- #
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.095 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.096 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.096 05:48:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.096 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 nvme0n1 00:27:09.661 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.661 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.661 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.661 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.661 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmMzNDQyNTFmM2E1ZWM2NzgwZDJkM2Q3MmEzY2ZmMWI5MGEyZWFkM2Y3MDg0OGQwN2Q4ZTA5YTIwYmVlNzM0NZJFRMc=: 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.920 
05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.920 05:48:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.508 nvme0n1 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.508 request: 00:27:10.508 { 00:27:10.508 "name": "nvme0", 00:27:10.508 "trtype": "tcp", 00:27:10.508 "traddr": "10.0.0.1", 00:27:10.508 "adrfam": "ipv4", 00:27:10.508 "trsvcid": "4420", 00:27:10.508 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:10.508 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:10.508 "prchk_reftag": false, 00:27:10.508 "prchk_guard": false, 00:27:10.508 "hdgst": false, 00:27:10.508 "ddgst": false, 00:27:10.508 "allow_unrecognized_csi": false, 00:27:10.508 "method": "bdev_nvme_attach_controller", 00:27:10.508 "req_id": 1 00:27:10.508 } 00:27:10.508 Got JSON-RPC error 
response 00:27:10.508 response: 00:27:10.508 { 00:27:10.508 "code": -5, 00:27:10.508 "message": "Input/output error" 00:27:10.508 } 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.508 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.767 request: 
00:27:10.767 { 00:27:10.767 "name": "nvme0", 00:27:10.767 "trtype": "tcp", 00:27:10.767 "traddr": "10.0.0.1", 00:27:10.767 "adrfam": "ipv4", 00:27:10.767 "trsvcid": "4420", 00:27:10.767 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:10.767 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:10.767 "prchk_reftag": false, 00:27:10.767 "prchk_guard": false, 00:27:10.767 "hdgst": false, 00:27:10.767 "ddgst": false, 00:27:10.767 "dhchap_key": "key2", 00:27:10.767 "allow_unrecognized_csi": false, 00:27:10.767 "method": "bdev_nvme_attach_controller", 00:27:10.767 "req_id": 1 00:27:10.767 } 00:27:10.767 Got JSON-RPC error response 00:27:10.767 response: 00:27:10.767 { 00:27:10.767 "code": -5, 00:27:10.767 "message": "Input/output error" 00:27:10.767 } 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.767 05:48:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.767 request: 00:27:10.767 { 00:27:10.767 "name": "nvme0", 00:27:10.767 "trtype": "tcp", 00:27:10.767 "traddr": "10.0.0.1", 00:27:10.767 "adrfam": "ipv4", 00:27:10.767 "trsvcid": "4420", 00:27:10.767 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:10.767 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:10.767 "prchk_reftag": false, 00:27:10.767 "prchk_guard": false, 00:27:10.767 "hdgst": false, 00:27:10.767 "ddgst": false, 00:27:10.767 "dhchap_key": "key1", 00:27:10.767 "dhchap_ctrlr_key": "ckey2", 00:27:10.767 "allow_unrecognized_csi": false, 00:27:10.767 "method": "bdev_nvme_attach_controller", 00:27:10.767 "req_id": 1 00:27:10.767 } 00:27:10.767 Got JSON-RPC error response 00:27:10.767 response: 00:27:10.767 { 00:27:10.767 "code": -5, 00:27:10.767 "message": "Input/output error" 00:27:10.767 } 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.767 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.026 nvme0n1 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:11.026 05:48:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:11.026 
05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.026 05:48:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.026 request: 00:27:11.026 { 00:27:11.026 "name": "nvme0", 00:27:11.026 "dhchap_key": "key1", 00:27:11.026 "dhchap_ctrlr_key": "ckey2", 00:27:11.026 "method": "bdev_nvme_set_keys", 00:27:11.026 "req_id": 1 00:27:11.026 } 00:27:11.026 Got JSON-RPC error response 00:27:11.026 response: 
00:27:11.026 { 00:27:11.026 "code": -13, 00:27:11.026 "message": "Permission denied" 00:27:11.026 } 00:27:11.026 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:11.026 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:11.026 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:11.026 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:11.026 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:11.284 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.284 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:11.284 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.284 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.284 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.284 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:11.284 05:48:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:12.219 05:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.219 05:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:12.219 05:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.219 05:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.219 05:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.219 05:49:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:12.219 05:49:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:13.154 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.154 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:13.154 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.154 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.154 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWUyZDdjOWY3YjRlZDE3MTJiYmI5NjBkYjQwMTViNDgxMDg2N2VhZDFmNDNkZWU28TX5eQ==: 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: ]] 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmQ0ZmUyNWY2NGVlMTQzOWYxOWQ5ODBkMzE2NWRkOWVhNTU3MDM5NjZmYzQwZjYz83fTrQ==: 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.413 nvme0n1 00:27:13.413 05:49:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTZiNTEwNjcxZjg0MTIzYmFiNjM2MmY3Nzc4ZWNlYTVV6jjN: 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: ]] 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2I3NjVhYWI0NTRkYjM4YmQ5NWEyNTgzMjBhOWUzYjN1qAZq: 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:13.413 05:49:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.413 request: 00:27:13.413 { 00:27:13.413 "name": "nvme0", 00:27:13.413 "dhchap_key": "key2", 00:27:13.413 "dhchap_ctrlr_key": "ckey1", 00:27:13.413 "method": "bdev_nvme_set_keys", 00:27:13.413 "req_id": 1 00:27:13.413 } 00:27:13.413 Got JSON-RPC error response 00:27:13.413 response: 00:27:13.413 { 00:27:13.413 "code": -13, 00:27:13.413 "message": "Permission denied" 00:27:13.413 } 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:13.413 05:49:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.413 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.672 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:13.672 05:49:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:14.610 rmmod nvme_tcp 
00:27:14.610 rmmod nvme_fabrics 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1894760 ']' 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1894760 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1894760 ']' 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1894760 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1894760 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1894760' 00:27:14.610 killing process with pid 1894760 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1894760 00:27:14.610 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1894760 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.870 05:49:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:17.409 05:49:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:17.409 05:49:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:19.948 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:19.948 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:21.329 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:21.329 05:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fec /tmp/spdk.key-null.jJB /tmp/spdk.key-sha256.PZG /tmp/spdk.key-sha384.WtU 
/tmp/spdk.key-sha512.nO1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:21.329 05:49:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:24.621 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:24.621 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:24.621 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:24.621 00:27:24.621 real 0m54.358s 00:27:24.621 user 0m48.527s 00:27:24.621 sys 0m12.739s 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.622 ************************************ 00:27:24.622 END TEST nvmf_auth_host 00:27:24.622 ************************************ 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.622 ************************************ 00:27:24.622 START TEST nvmf_digest 00:27:24.622 ************************************ 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:24.622 * Looking for test storage... 00:27:24.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.622 05:49:12 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.622 --rc genhtml_branch_coverage=1 00:27:24.622 --rc genhtml_function_coverage=1 00:27:24.622 --rc genhtml_legend=1 00:27:24.622 --rc geninfo_all_blocks=1 00:27:24.622 --rc geninfo_unexecuted_blocks=1 00:27:24.622 00:27:24.622 ' 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.622 --rc genhtml_branch_coverage=1 00:27:24.622 --rc genhtml_function_coverage=1 00:27:24.622 --rc genhtml_legend=1 00:27:24.622 --rc geninfo_all_blocks=1 00:27:24.622 --rc geninfo_unexecuted_blocks=1 00:27:24.622 00:27:24.622 ' 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.622 --rc genhtml_branch_coverage=1 00:27:24.622 --rc genhtml_function_coverage=1 00:27:24.622 --rc genhtml_legend=1 00:27:24.622 --rc geninfo_all_blocks=1 00:27:24.622 --rc geninfo_unexecuted_blocks=1 00:27:24.622 00:27:24.622 ' 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:24.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.622 --rc genhtml_branch_coverage=1 00:27:24.622 --rc genhtml_function_coverage=1 00:27:24.622 --rc genhtml_legend=1 00:27:24.622 --rc geninfo_all_blocks=1 00:27:24.622 --rc geninfo_unexecuted_blocks=1 00:27:24.622 00:27:24.622 ' 00:27:24.622 05:49:12 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.622 
05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.622 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:24.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:24.623 05:49:12 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:24.623 05:49:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.200 05:49:17 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:31.200 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:31.200 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.200 05:49:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:31.200 Found net devices under 0000:86:00.0: cvl_0_0 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:31.200 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:31.201 Found net devices under 0000:86:00.1: cvl_0_1 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:31.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:27:31.201 00:27:31.201 --- 10.0.0.2 ping statistics --- 00:27:31.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.201 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:31.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:27:31.201 00:27:31.201 --- 10.0.0.1 ping statistics --- 00:27:31.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.201 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:31.201 ************************************ 00:27:31.201 START TEST nvmf_digest_clean 00:27:31.201 ************************************ 00:27:31.201 
05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1908535 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1908535 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1908535 ']' 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.201 05:49:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.201 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.201 [2024-11-27 05:49:18.377942] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:27:31.201 [2024-11-27 05:49:18.377983] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.201 [2024-11-27 05:49:18.458556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.201 [2024-11-27 05:49:18.498360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.202 [2024-11-27 05:49:18.498397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.202 [2024-11-27 05:49:18.498404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.202 [2024-11-27 05:49:18.498409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.202 [2024-11-27 05:49:18.498417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:31.202 [2024-11-27 05:49:18.498979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.202 null0 00:27:31.202 [2024-11-27 05:49:18.642523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.202 [2024-11-27 05:49:18.666732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1908557 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1908557 /var/tmp/bperf.sock 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1908557 ']' 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:31.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.202 [2024-11-27 05:49:18.725765] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:27:31.202 [2024-11-27 05:49:18.725806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908557 ] 00:27:31.202 [2024-11-27 05:49:18.801797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.202 [2024-11-27 05:49:18.843082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:31.202 05:49:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:31.202 05:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.202 05:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.769 nvme0n1 00:27:31.769 05:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:31.769 05:49:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:31.769 Running I/O for 2 seconds... 00:27:33.644 24790.00 IOPS, 96.84 MiB/s [2024-11-27T04:49:21.648Z] 25502.50 IOPS, 99.62 MiB/s 00:27:33.644 Latency(us) 00:27:33.644 [2024-11-27T04:49:21.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.644 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:33.644 nvme0n1 : 2.00 25519.93 99.69 0.00 0.00 5010.62 2605.84 11858.90 00:27:33.644 [2024-11-27T04:49:21.648Z] =================================================================================================================== 00:27:33.644 [2024-11-27T04:49:21.648Z] Total : 25519.93 99.69 0.00 0.00 5010.62 2605.84 11858.90 00:27:33.644 { 00:27:33.644 "results": [ 00:27:33.644 { 00:27:33.644 "job": "nvme0n1", 00:27:33.644 "core_mask": "0x2", 00:27:33.644 "workload": "randread", 00:27:33.644 "status": "finished", 00:27:33.644 "queue_depth": 128, 00:27:33.644 "io_size": 4096, 00:27:33.644 "runtime": 2.00365, 00:27:33.644 "iops": 25519.926134803984, 00:27:33.644 "mibps": 99.68721146407806, 00:27:33.644 "io_failed": 0, 00:27:33.644 "io_timeout": 0, 00:27:33.644 "avg_latency_us": 5010.621477472846, 00:27:33.644 "min_latency_us": 2605.8361904761905, 00:27:33.644 "max_latency_us": 11858.895238095238 00:27:33.644 } 00:27:33.645 ], 00:27:33.645 "core_count": 1 00:27:33.645 } 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:33.903 | select(.opcode=="crc32c") 00:27:33.903 | "\(.module_name) \(.executed)"' 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1908557 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1908557 ']' 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1908557 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.903 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1908557 00:27:34.162 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:34.162 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:34.162 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1908557' 00:27:34.162 killing process with pid 1908557 00:27:34.162 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1908557 00:27:34.162 Received shutdown signal, test time was about 2.000000 seconds 00:27:34.162 00:27:34.163 Latency(us) 00:27:34.163 [2024-11-27T04:49:22.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.163 [2024-11-27T04:49:22.167Z] =================================================================================================================== 00:27:34.163 [2024-11-27T04:49:22.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:34.163 05:49:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1908557 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1909034 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1909034 /var/tmp/bperf.sock 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1909034 ']' 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:34.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.163 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:34.163 [2024-11-27 05:49:22.124989] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:27:34.163 [2024-11-27 05:49:22.125035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909034 ] 00:27:34.163 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:34.163 Zero copy mechanism will not be used. 
00:27:34.422 [2024-11-27 05:49:22.199392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.422 [2024-11-27 05:49:22.236243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.422 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.422 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:34.422 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:34.422 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:34.422 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:34.682 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.682 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.941 nvme0n1 00:27:34.941 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:34.941 05:49:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:35.199 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:35.199 Zero copy mechanism will not be used. 00:27:35.199 Running I/O for 2 seconds... 
00:27:37.072 5950.00 IOPS, 743.75 MiB/s [2024-11-27T04:49:25.076Z] 5888.00 IOPS, 736.00 MiB/s 00:27:37.072 Latency(us) 00:27:37.072 [2024-11-27T04:49:25.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.072 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:37.072 nvme0n1 : 2.00 5891.36 736.42 0.00 0.00 2713.34 651.46 8800.55 00:27:37.072 [2024-11-27T04:49:25.076Z] =================================================================================================================== 00:27:37.072 [2024-11-27T04:49:25.076Z] Total : 5891.36 736.42 0.00 0.00 2713.34 651.46 8800.55 00:27:37.072 { 00:27:37.072 "results": [ 00:27:37.072 { 00:27:37.072 "job": "nvme0n1", 00:27:37.072 "core_mask": "0x2", 00:27:37.072 "workload": "randread", 00:27:37.072 "status": "finished", 00:27:37.072 "queue_depth": 16, 00:27:37.072 "io_size": 131072, 00:27:37.072 "runtime": 2.001575, 00:27:37.072 "iops": 5891.360553564068, 00:27:37.072 "mibps": 736.4200691955085, 00:27:37.072 "io_failed": 0, 00:27:37.072 "io_timeout": 0, 00:27:37.072 "avg_latency_us": 2713.3414692769916, 00:27:37.072 "min_latency_us": 651.4590476190476, 00:27:37.072 "max_latency_us": 8800.548571428571 00:27:37.072 } 00:27:37.072 ], 00:27:37.072 "core_count": 1 00:27:37.072 } 00:27:37.072 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:37.072 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:37.072 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:37.072 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:37.072 | select(.opcode=="crc32c") 00:27:37.072 | "\(.module_name) \(.executed)"' 00:27:37.072 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1909034 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1909034 ']' 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1909034 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1909034 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1909034' 00:27:37.331 killing process with pid 1909034 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1909034 00:27:37.331 Received shutdown signal, test time was about 2.000000 seconds 
00:27:37.331 00:27:37.331 Latency(us) 00:27:37.331 [2024-11-27T04:49:25.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.331 [2024-11-27T04:49:25.335Z] =================================================================================================================== 00:27:37.331 [2024-11-27T04:49:25.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:37.331 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1909034 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1909719 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1909719 /var/tmp/bperf.sock 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1909719 ']' 00:27:37.590 05:49:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:37.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:37.590 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:37.590 [2024-11-27 05:49:25.491747] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:27:37.590 [2024-11-27 05:49:25.491799] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909719 ] 00:27:37.590 [2024-11-27 05:49:25.566097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.849 [2024-11-27 05:49:25.607919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.849 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:37.849 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:37.849 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:37.849 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:37.849 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:38.108 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:38.108 05:49:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:38.368 nvme0n1 00:27:38.368 05:49:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:38.368 05:49:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:38.368 Running I/O for 2 seconds... 
00:27:40.683 28163.00 IOPS, 110.01 MiB/s [2024-11-27T04:49:28.687Z] 28350.50 IOPS, 110.74 MiB/s 00:27:40.683 Latency(us) 00:27:40.683 [2024-11-27T04:49:28.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.683 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:40.683 nvme0n1 : 2.01 28348.85 110.74 0.00 0.00 4509.47 2106.51 15042.07 00:27:40.683 [2024-11-27T04:49:28.687Z] =================================================================================================================== 00:27:40.683 [2024-11-27T04:49:28.687Z] Total : 28348.85 110.74 0.00 0.00 4509.47 2106.51 15042.07 00:27:40.683 { 00:27:40.683 "results": [ 00:27:40.683 { 00:27:40.683 "job": "nvme0n1", 00:27:40.683 "core_mask": "0x2", 00:27:40.683 "workload": "randwrite", 00:27:40.683 "status": "finished", 00:27:40.683 "queue_depth": 128, 00:27:40.683 "io_size": 4096, 00:27:40.683 "runtime": 2.006889, 00:27:40.683 "iops": 28348.852377984033, 00:27:40.683 "mibps": 110.73770460150013, 00:27:40.683 "io_failed": 0, 00:27:40.683 "io_timeout": 0, 00:27:40.683 "avg_latency_us": 4509.465709213536, 00:27:40.683 "min_latency_us": 2106.5142857142855, 00:27:40.683 "max_latency_us": 15042.07238095238 00:27:40.683 } 00:27:40.683 ], 00:27:40.683 "core_count": 1 00:27:40.683 } 00:27:40.683 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:40.683 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:40.683 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:40.683 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:40.683 | select(.opcode=="crc32c") 00:27:40.683 | "\(.module_name) \(.executed)"' 00:27:40.683 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:40.683 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:40.683 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1909719 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1909719 ']' 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1909719 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1909719 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1909719' 00:27:40.684 killing process with pid 1909719 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1909719 00:27:40.684 Received shutdown signal, test time was about 2.000000 seconds 
00:27:40.684 00:27:40.684 Latency(us) 00:27:40.684 [2024-11-27T04:49:28.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.684 [2024-11-27T04:49:28.688Z] =================================================================================================================== 00:27:40.684 [2024-11-27T04:49:28.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:40.684 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1909719 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1910208 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1910208 /var/tmp/bperf.sock 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1910208 ']' 00:27:40.943 05:49:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:40.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:40.943 [2024-11-27 05:49:28.778381] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:27:40.943 [2024-11-27 05:49:28.778424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910208 ] 00:27:40.943 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:40.943 Zero copy mechanism will not be used. 
00:27:40.943 [2024-11-27 05:49:28.854081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.943 [2024-11-27 05:49:28.891365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:40.943 05:49:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:41.511 05:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.511 05:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:41.771 nvme0n1 00:27:41.771 05:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:41.771 05:49:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:41.771 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:41.771 Zero copy mechanism will not be used. 00:27:41.771 Running I/O for 2 seconds... 
00:27:44.087 5921.00 IOPS, 740.12 MiB/s [2024-11-27T04:49:32.091Z] 6303.00 IOPS, 787.88 MiB/s 00:27:44.087 Latency(us) 00:27:44.087 [2024-11-27T04:49:32.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.087 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:44.087 nvme0n1 : 2.00 6300.42 787.55 0.00 0.00 2535.14 1810.04 11734.06 00:27:44.087 [2024-11-27T04:49:32.091Z] =================================================================================================================== 00:27:44.087 [2024-11-27T04:49:32.091Z] Total : 6300.42 787.55 0.00 0.00 2535.14 1810.04 11734.06 00:27:44.087 { 00:27:44.087 "results": [ 00:27:44.087 { 00:27:44.087 "job": "nvme0n1", 00:27:44.087 "core_mask": "0x2", 00:27:44.087 "workload": "randwrite", 00:27:44.087 "status": "finished", 00:27:44.087 "queue_depth": 16, 00:27:44.087 "io_size": 131072, 00:27:44.087 "runtime": 2.0032, 00:27:44.087 "iops": 6300.419329073482, 00:27:44.087 "mibps": 787.5524161341853, 00:27:44.087 "io_failed": 0, 00:27:44.087 "io_timeout": 0, 00:27:44.087 "avg_latency_us": 2535.1424396980087, 00:27:44.087 "min_latency_us": 1810.0419047619048, 00:27:44.087 "max_latency_us": 11734.064761904761 00:27:44.087 } 00:27:44.087 ], 00:27:44.087 "core_count": 1 00:27:44.087 } 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:44.087 | select(.opcode=="crc32c") 00:27:44.087 | "\(.module_name) \(.executed)"' 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1910208 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1910208 ']' 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1910208 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.087 05:49:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1910208 00:27:44.087 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:44.087 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:44.087 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1910208' 00:27:44.087 killing process with pid 1910208 00:27:44.087 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1910208 00:27:44.087 Received shutdown signal, test time was about 2.000000 seconds 
00:27:44.087 00:27:44.087 Latency(us) 00:27:44.087 [2024-11-27T04:49:32.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.088 [2024-11-27T04:49:32.092Z] =================================================================================================================== 00:27:44.088 [2024-11-27T04:49:32.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.088 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1910208 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1908535 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1908535 ']' 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1908535 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1908535 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1908535' 00:27:44.347 killing process with pid 1908535 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1908535 00:27:44.347 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1908535 00:27:44.607 00:27:44.607 
real 0m14.067s 00:27:44.607 user 0m26.893s 00:27:44.607 sys 0m4.635s 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:44.607 ************************************ 00:27:44.607 END TEST nvmf_digest_clean 00:27:44.607 ************************************ 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:44.607 ************************************ 00:27:44.607 START TEST nvmf_digest_error 00:27:44.607 ************************************ 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1910859 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:44.607 
05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1910859 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1910859 ']' 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.607 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.607 [2024-11-27 05:49:32.510932] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:27:44.607 [2024-11-27 05:49:32.510971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.607 [2024-11-27 05:49:32.589561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.867 [2024-11-27 05:49:32.630264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.867 [2024-11-27 05:49:32.630301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:44.867 [2024-11-27 05:49:32.630308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.867 [2024-11-27 05:49:32.630314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.867 [2024-11-27 05:49:32.630319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:44.867 [2024-11-27 05:49:32.630888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.867 [2024-11-27 05:49:32.703344] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.867 05:49:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:44.867 null0 00:27:44.867 [2024-11-27 05:49:32.799374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.867 [2024-11-27 05:49:32.823578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1910945 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1910945 /var/tmp/bperf.sock 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1910945 ']' 
00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:44.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.867 05:49:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.126 [2024-11-27 05:49:32.873555] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:27:45.126 [2024-11-27 05:49:32.873597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910945 ] 00:27:45.126 [2024-11-27 05:49:32.946046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.126 [2024-11-27 05:49:32.986298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.126 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.126 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:45.126 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.126 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:45.385 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:45.385 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.385 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.385 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.385 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.385 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.644 nvme0n1 00:27:45.644 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:45.644 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.644 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.644 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.644 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:45.644 05:49:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:45.904 Running I/O for 2 seconds... 00:27:45.904 [2024-11-27 05:49:33.665832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.665871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.665882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.675741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.675766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.675775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.685348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.685372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.685381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.694753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.694774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14395 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.694783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.704087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.704107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.704116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.712804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.712825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.712833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.722471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.722493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.722501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.732109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.732131] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.732139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.741750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.741771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.741778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.751621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.751642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.751650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.759800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.759821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.759829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.772149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 
05:49:33.772170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.904 [2024-11-27 05:49:33.772179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.904 [2024-11-27 05:49:33.783323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.904 [2024-11-27 05:49:33.783344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.783356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.791854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.791876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.791884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.802306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.802326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.802334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.810634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.810655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.810663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.819667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.819693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.819701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.829138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.829161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.829169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.839006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.839027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.839035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.848683] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.848704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.848712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.859734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.859755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.859763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.868145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.868170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.868179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.878018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.878039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.878047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.887664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.887691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.887699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:45.905 [2024-11-27 05:49:33.896562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:45.905 [2024-11-27 05:49:33.896583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.905 [2024-11-27 05:49:33.896593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:33.905978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:33.906000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:33.906010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:33.914820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:33.914841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:33.914851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:33.925249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:33.925271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:33.925281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:33.934006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:33.934028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:33.934036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:33.944183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:33.944205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:33.944213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:33.955707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:33.955729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:33.955738] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:33.963574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:33.963595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:33.963604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:33.974038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:33.974059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:33.974067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:33.983862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:33.983883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:33.983892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:33.994303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:33.994324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13934 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:33.994332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:34.002723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:34.002745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:34.002754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:34.013178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:34.013200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:34.013208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:34.024611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:34.024632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:34.024640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:34.033303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:34.033324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:114 nsid:1 lba:12950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:34.033338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:34.044644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:34.044664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:34.044679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:34.054207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:34.054228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:34.054236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:34.063846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 05:49:34.063866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.166 [2024-11-27 05:49:34.063874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.166 [2024-11-27 05:49:34.072174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.166 [2024-11-27 
05:49:34.072195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.167 [2024-11-27 05:49:34.072205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.167 [2024-11-27 05:49:34.081252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.167 [2024-11-27 05:49:34.081275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.167 [2024-11-27 05:49:34.081283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.167 [2024-11-27 05:49:34.091480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.167 [2024-11-27 05:49:34.091503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.167 [2024-11-27 05:49:34.091511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.167 [2024-11-27 05:49:34.102646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.167 [2024-11-27 05:49:34.102667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.167 [2024-11-27 05:49:34.102683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.167 [2024-11-27 05:49:34.113779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xc3c6b0) 00:27:46.167 [2024-11-27 05:49:34.113800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.167 [2024-11-27 05:49:34.113808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.167 [2024-11-27 05:49:34.126162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.167 [2024-11-27 05:49:34.126187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.167 [2024-11-27 05:49:34.126194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.167 [2024-11-27 05:49:34.134654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.167 [2024-11-27 05:49:34.134680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.167 [2024-11-27 05:49:34.134688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.167 [2024-11-27 05:49:34.147358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.167 [2024-11-27 05:49:34.147379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.167 [2024-11-27 05:49:34.147387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.167 [2024-11-27 05:49:34.159544] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.167 [2024-11-27 05:49:34.159564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.167 [2024-11-27 05:49:34.159572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.171754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.171775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 05:49:34.171784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.180213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.180233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 05:49:34.180241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.190664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.190690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 05:49:34.190698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.203014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.203035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 05:49:34.203043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.212833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.212854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 05:49:34.212865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.221373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.221394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 05:49:34.221402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.233542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.233562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 05:49:34.233571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.241789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.241809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 05:49:34.241818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.251998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.252017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 05:49:34.252025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.262420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.262441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 05:49:34.262449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:46.427 [2024-11-27 05:49:34.273589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:46.427 [2024-11-27 05:49:34.273609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:46.427 [2024-11-27 
05:49:34.273617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.427 [2024-11-27 05:49:34.283033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.427 [2024-11-27 05:49:34.283053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.427 [2024-11-27 05:49:34.283061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.427 [2024-11-27 05:49:34.295158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.427 [2024-11-27 05:49:34.295179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.427 [2024-11-27 05:49:34.295188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.427 [2024-11-27 05:49:34.303662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.427 [2024-11-27 05:49:34.303691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.427 [2024-11-27 05:49:34.303700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.427 [2024-11-27 05:49:34.315446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.427 [2024-11-27 05:49:34.315466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8180 len:1 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.427 [2024-11-27 05:49:34.315474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.427 [2024-11-27 05:49:34.326685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.428 [2024-11-27 05:49:34.326705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.428 [2024-11-27 05:49:34.326713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.428 [2024-11-27 05:49:34.335920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.428 [2024-11-27 05:49:34.335939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.428 [2024-11-27 05:49:34.335948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.428 [2024-11-27 05:49:34.347265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.428 [2024-11-27 05:49:34.347286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.428 [2024-11-27 05:49:34.347294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.428 [2024-11-27 05:49:34.359061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.428 [2024-11-27 05:49:34.359082] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.428 [2024-11-27 05:49:34.359090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.428 [2024-11-27 05:49:34.368634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.428 [2024-11-27 05:49:34.368653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.428 [2024-11-27 05:49:34.368662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.428 [2024-11-27 05:49:34.378032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.428 [2024-11-27 05:49:34.378052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.428 [2024-11-27 05:49:34.378061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.428 [2024-11-27 05:49:34.387076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.428 [2024-11-27 05:49:34.387097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.428 [2024-11-27 05:49:34.387105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.428 [2024-11-27 05:49:34.397640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.428 [2024-11-27 05:49:34.397660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.428 [2024-11-27 05:49:34.397674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.428 [2024-11-27 05:49:34.405732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.428 [2024-11-27 05:49:34.405752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.428 [2024-11-27 05:49:34.405760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.428 [2024-11-27 05:49:34.417539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.428 [2024-11-27 05:49:34.417561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.428 [2024-11-27 05:49:34.417569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.688 [2024-11-27 05:49:34.430540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.688 [2024-11-27 05:49:34.430561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.688 [2024-11-27 05:49:34.430570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.688 [2024-11-27 05:49:34.441908]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.688 [2024-11-27 05:49:34.441928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.688 [2024-11-27 05:49:34.441937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.688 [2024-11-27 05:49:34.450087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.688 [2024-11-27 05:49:34.450107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.688 [2024-11-27 05:49:34.450115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.688 [2024-11-27 05:49:34.461656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.688 [2024-11-27 05:49:34.461683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.688 [2024-11-27 05:49:34.461691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.688 [2024-11-27 05:49:34.469850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.688 [2024-11-27 05:49:34.469870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.688 [2024-11-27 05:49:34.469879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0
m:0 dnr:0
00:27:46.688 [2024-11-27 05:49:34.480984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.688 [2024-11-27 05:49:34.481005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.688 [2024-11-27 05:49:34.481017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.688 [2024-11-27 05:49:34.489423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.688 [2024-11-27 05:49:34.489444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.688 [2024-11-27 05:49:34.489452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.688 [2024-11-27 05:49:34.501114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.688 [2024-11-27 05:49:34.501135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.688 [2024-11-27 05:49:34.501143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.688 [2024-11-27 05:49:34.513664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.688 [2024-11-27 05:49:34.513690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.688 [2024-11-27 05:49:34.513698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.688 [2024-11-27 05:49:34.523636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.688 [2024-11-27 05:49:34.523657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.523665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.532277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.532298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.532307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.541947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.541968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.541975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.551418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.551438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.551446] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.560915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.560935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.560943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.570370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.570394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.570403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.580556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.580578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.580586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.590154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.590175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.590183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.602301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.602322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.602329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.610956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.610979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.610988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.623414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.623434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.623442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.636229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.636251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1
lba:22544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.636259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.648755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.648776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.648784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 25149.00 IOPS, 98.24 MiB/s [2024-11-27T04:49:34.693Z] [2024-11-27 05:49:34.658017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.658036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.658044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.668300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.668322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.668331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.680502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689
[2024-11-27 05:49:34.680523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.680532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.689 [2024-11-27 05:49:34.688965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.689 [2024-11-27 05:49:34.688986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.689 [2024-11-27 05:49:34.688994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.699745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.699767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.699775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.709049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.709070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.709079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.718254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.718275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.718284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.727841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.727861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.727869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.736851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.736871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.736879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.747242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.747263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.747274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.759595]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.759616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.759624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.770706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.770727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.770736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.779726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.779746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.779755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.792935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.792956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.792964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0
sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.803878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.803898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.803906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.812454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.812473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.812482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.823251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.823271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.823279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.831226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.831247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.831256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.842654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.842680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.842689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.851304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.851324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.851332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.863121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.863142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.863150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.874343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.874364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.874373]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.883107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.883128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.883137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.893929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.893950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.893958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.904958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.904979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.904987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.913782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.913803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10054 len:1 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.913811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.922067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.922087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.922098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.932055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.932075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.932084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:46.948 [2024-11-27 05:49:34.942593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:46.948 [2024-11-27 05:49:34.942614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:46.948 [2024-11-27 05:49:34.942622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:47.208 [2024-11-27 05:49:34.953803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0)
00:27:47.208 [2024-11-27 05:49:34.953825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:5 nsid:1 lba:3818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.208 [2024-11-27 05:49:34.953833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.208 [2024-11-27 05:49:34.961948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.208 [2024-11-27 05:49:34.961968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.208 [2024-11-27 05:49:34.961976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.208 [2024-11-27 05:49:34.974280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.208 [2024-11-27 05:49:34.974302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.208 [2024-11-27 05:49:34.974311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.208 [2024-11-27 05:49:34.982871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.208 [2024-11-27 05:49:34.982892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.208 [2024-11-27 05:49:34.982900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.208 [2024-11-27 05:49:34.994443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.208 [2024-11-27 05:49:34.994464] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.208 [2024-11-27 05:49:34.994472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.208 [2024-11-27 05:49:35.005338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.208 [2024-11-27 05:49:35.005359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.208 [2024-11-27 05:49:35.005367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.208 [2024-11-27 05:49:35.019134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.208 [2024-11-27 05:49:35.019159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.208 [2024-11-27 05:49:35.019167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.208 [2024-11-27 05:49:35.031422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.208 [2024-11-27 05:49:35.031442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.208 [2024-11-27 05:49:35.031450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.208 [2024-11-27 05:49:35.039805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xc3c6b0) 00:27:47.208 [2024-11-27 05:49:35.039827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.208 [2024-11-27 05:49:35.039837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.208 [2024-11-27 05:49:35.050045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.050066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.050074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.059878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.059899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.059907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.071724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.071748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.071756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.080209] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.080230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.080239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.091597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.091618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.091626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.102824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.102845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.102853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.110690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.110712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.110720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.123113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.123135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.123142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.134954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.134977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.134985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.145003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.145024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.145032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.153218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.153239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.153248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.164776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.164796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.164804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.172894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.172913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.172921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.183524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.183544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.183552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.194310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.194330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.194343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.209 [2024-11-27 05:49:35.204404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.209 [2024-11-27 05:49:35.204424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.209 [2024-11-27 05:49:35.204432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.214928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.214950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.214959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.224061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.224081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.224090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.233270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.233289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:47.469 [2024-11-27 05:49:35.233297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.245022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.245044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.245052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.254799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.254820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.254828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.263445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.263465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.263474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.273899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.273935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:9350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.273943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.286164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.286189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.286197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.296034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.296055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.296063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.304355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.304375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.304383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.315012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.315031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.315039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.327403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.327423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.327430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.337860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.337880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.337888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.346933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.346962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.346971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.356157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 
00:27:47.469 [2024-11-27 05:49:35.356178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.356186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.365681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.365719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.365738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.374317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.374338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.374346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.384463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.384484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.469 [2024-11-27 05:49:35.384492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.469 [2024-11-27 05:49:35.392910] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.469 [2024-11-27 05:49:35.392931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.470 [2024-11-27 05:49:35.392939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.470 [2024-11-27 05:49:35.403466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.470 [2024-11-27 05:49:35.403487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.470 [2024-11-27 05:49:35.403494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.470 [2024-11-27 05:49:35.411557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.470 [2024-11-27 05:49:35.411578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.470 [2024-11-27 05:49:35.411586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.470 [2024-11-27 05:49:35.421766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.470 [2024-11-27 05:49:35.421788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.470 [2024-11-27 05:49:35.421796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:47.470 [2024-11-27 05:49:35.429746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.470 [2024-11-27 05:49:35.429768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.470 [2024-11-27 05:49:35.429776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.470 [2024-11-27 05:49:35.439925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.470 [2024-11-27 05:49:35.439946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.470 [2024-11-27 05:49:35.439955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.470 [2024-11-27 05:49:35.449650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.470 [2024-11-27 05:49:35.449677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.470 [2024-11-27 05:49:35.449690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.470 [2024-11-27 05:49:35.459798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.470 [2024-11-27 05:49:35.459819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.470 [2024-11-27 05:49:35.459827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.470 [2024-11-27 05:49:35.469980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.470 [2024-11-27 05:49:35.470000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.470 [2024-11-27 05:49:35.470008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.478660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.478687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.478696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.488027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.488049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.488057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.499088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.499108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.499116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.507011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.507031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.507039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.518454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.518475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.518483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.526709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.526731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.526739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.537936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.537957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:47.730 [2024-11-27 05:49:35.537966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.547827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.547847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.547855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.556184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.556204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.556212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.565601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.565622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.565630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.574931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.574952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 
lba:17068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.574960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.584843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.584864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.584872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.593235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.593271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.593279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.604919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.604940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.604948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.615897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.615917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.615930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.624465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.624486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.624496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.635065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.635086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.635093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.643937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 00:27:47.730 [2024-11-27 05:49:35.643957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.643965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 [2024-11-27 05:49:35.653414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc3c6b0) 
00:27:47.730 [2024-11-27 05:49:35.653436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:47.730 [2024-11-27 05:49:35.653444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.730 25292.50 IOPS, 98.80 MiB/s 00:27:47.730 Latency(us) 00:27:47.730 [2024-11-27T04:49:35.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.730 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:47.730 nvme0n1 : 2.04 24804.05 96.89 0.00 0.00 5053.64 2200.14 44439.65 00:27:47.730 [2024-11-27T04:49:35.734Z] =================================================================================================================== 00:27:47.730 [2024-11-27T04:49:35.734Z] Total : 24804.05 96.89 0.00 0.00 5053.64 2200.14 44439.65 00:27:47.730 { 00:27:47.730 "results": [ 00:27:47.730 { 00:27:47.730 "job": "nvme0n1", 00:27:47.730 "core_mask": "0x2", 00:27:47.730 "workload": "randread", 00:27:47.730 "status": "finished", 00:27:47.730 "queue_depth": 128, 00:27:47.730 "io_size": 4096, 00:27:47.730 "runtime": 2.044545, 00:27:47.730 "iops": 24804.05175723694, 00:27:47.730 "mibps": 96.8908271767068, 00:27:47.730 "io_failed": 0, 00:27:47.730 "io_timeout": 0, 00:27:47.730 "avg_latency_us": 5053.644002317429, 00:27:47.730 "min_latency_us": 2200.137142857143, 00:27:47.730 "max_latency_us": 44439.64952380952 00:27:47.730 } 00:27:47.730 ], 00:27:47.730 "core_count": 1 00:27:47.730 } 00:27:47.730 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:47.730 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:47.730 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:27:47.730 | .driver_specific 00:27:47.730 | .nvme_error 00:27:47.730 | .status_code 00:27:47.730 | .command_transient_transport_error' 00:27:47.730 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 )) 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1910945 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1910945 ']' 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1910945 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1910945 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1910945' 00:27:47.990 killing process with pid 1910945 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1910945 00:27:47.990 Received shutdown signal, test time was about 2.000000 seconds 00:27:47.990 00:27:47.990 Latency(us) 00:27:47.990 [2024-11-27T04:49:35.994Z] Device Information : runtime(s) IOPS MiB/s 
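The `get_transient_errcount` step above asks bdevperf for `bdev_get_iostat` over the RPC socket and walks the JSON with jq to pull out the transient transport error counter (198 here). A minimal Python sketch of the same extraction — the sample JSON below is hypothetical, trimmed to only the fields the jq filter touches:

```python
import json

# Hypothetical sample shaped like `rpc.py bdev_get_iostat -b nvme0n1` output;
# only the fields the jq filter in the trace touches are included.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 198
          }
        }
      }
    }
  ]
}
""")

# Same walk as: jq -r '.bdevs[0] | .driver_specific | .nvme_error
#                        | .status_code | .command_transient_transport_error'
errcount = (iostat["bdevs"][0]["driver_specific"]
            ["nvme_error"]["status_code"]["command_transient_transport_error"])

# Mirrors the shell assertion in the trace: (( 198 > 0 ))
assert errcount > 0
```

The test passes as long as at least one command completed with TRANSIENT TRANSPORT ERROR, i.e. the injected digest corruption was actually observed by the host.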
Fail/s TO/s Average min max 00:27:47.990 [2024-11-27T04:49:35.994Z] =================================================================================================================== 00:27:47.990 [2024-11-27T04:49:35.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.990 05:49:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1910945 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1911420 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1911420 /var/tmp/bperf.sock 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1911420 ']' 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bperf.sock...' 00:27:48.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.250 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.250 [2024-11-27 05:49:36.171991] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:27:48.250 [2024-11-27 05:49:36.172041] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911420 ] 00:27:48.250 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:48.250 Zero copy mechanism will not be used. 00:27:48.250 [2024-11-27 05:49:36.248713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.508 [2024-11-27 05:49:36.290592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.508 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.508 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:48.509 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:48.509 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:48.768 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:48.768 05:49:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.768 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:48.768 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.768 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.768 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.028 nvme0n1 00:27:49.028 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:49.028 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.028 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.028 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.028 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:49.028 05:49:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:49.028 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:49.028 Zero copy mechanism will not be used. 00:27:49.029 Running I/O for 2 seconds... 
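The controller above is attached with `--ddgst`, so every received PDU payload is verified against a CRC-32C data digest; the `accel_error_inject_error -o crc32c -t corrupt` step then corrupts that computation, which is what produces the "data digest error" lines that follow. As background, a bitwise sketch of the CRC-32C (Castagnoli) checksum itself — the payload below is illustrative, not from the test:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), the checksum NVMe/TCP uses for its
    header and data digests. Reflected polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# The receiver recomputes the digest over the payload and compares it with
# the PDU's DDGST field; a single flipped bit is enough to mismatch and
# trigger the "data digest error on tqpair" lines seen in this log.
payload = b"example pdu payload"
good = crc32c(payload)
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert crc32c(corrupted) != good
```

Note the error is surfaced as COMMAND TRANSIENT TRANSPORT ERROR (status 00/22) rather than a data error, which is why the test counts `command_transient_transport_error` in the iostat output.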
00:27:49.029 [2024-11-27 05:49:36.984006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.029 [2024-11-27 05:49:36.984040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.029 [2024-11-27 05:49:36.984050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.029 [2024-11-27 05:49:36.990446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.029 [2024-11-27 05:49:36.990471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.029 [2024-11-27 05:49:36.990481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.029 [2024-11-27 05:49:36.994365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.029 [2024-11-27 05:49:36.994389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.029 [2024-11-27 05:49:36.994398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.029 [2024-11-27 05:49:37.002131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.029 [2024-11-27 05:49:37.002160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.029 [2024-11-27 05:49:37.002169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.029 [2024-11-27 05:49:37.009945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.029 [2024-11-27 05:49:37.009968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.029 [2024-11-27 05:49:37.009976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.029 [2024-11-27 05:49:37.018302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.029 [2024-11-27 05:49:37.018325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.029 [2024-11-27 05:49:37.018334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.029 [2024-11-27 05:49:37.024797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.029 [2024-11-27 05:49:37.024820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.029 [2024-11-27 05:49:37.024828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.290 [2024-11-27 05:49:37.031085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.290 [2024-11-27 05:49:37.031109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.290 [2024-11-27 05:49:37.031118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.290 [2024-11-27 05:49:37.036888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.290 [2024-11-27 05:49:37.036910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.290 [2024-11-27 05:49:37.036929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.290 [2024-11-27 05:49:37.042594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.290 [2024-11-27 05:49:37.042616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.290 [2024-11-27 05:49:37.042625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.290 [2024-11-27 05:49:37.047990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.290 [2024-11-27 05:49:37.048012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.290 [2024-11-27 05:49:37.048021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.290 [2024-11-27 05:49:37.053528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.290 [2024-11-27 05:49:37.053551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:49.290 [2024-11-27 05:49:37.053559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.290 [2024-11-27 05:49:37.059369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.290 [2024-11-27 05:49:37.059392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.290 [2024-11-27 05:49:37.059400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.290 [2024-11-27 05:49:37.064652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.290 [2024-11-27 05:49:37.064680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.290 [2024-11-27 05:49:37.064689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.290 [2024-11-27 05:49:37.070207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.290 [2024-11-27 05:49:37.070229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.290 [2024-11-27 05:49:37.070238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.290 [2024-11-27 05:49:37.075621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.290 [2024-11-27 05:49:37.075643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.290 [2024-11-27 05:49:37.075652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.290 [2024-11-27 05:49:37.081039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.290 [2024-11-27 05:49:37.081061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.081069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.086290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.086312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.086321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.091592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.091614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.091622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.096935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.096957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.096965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.102458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.102479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.102491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.108022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.108043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.108051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.113485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.113512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.113519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.118994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 
00:27:49.291 [2024-11-27 05:49:37.119016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.119024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.124374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.124396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.124404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.129408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.129431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.129439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.134758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.134780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.134788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.140024] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.140045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.140053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.145111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.145133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.145141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.150339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.150363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.150371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.155816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.155837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.155844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.161296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.161318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.161326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.166687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.166709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.166716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.172084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.172107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.172115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.177480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.177503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.177511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.182777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.182800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.182807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.188117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.188139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.188146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.191005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.191025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.191033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.196321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.196342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.196350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.201477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.201497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.201505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.206770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.206789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.206798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.211997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.212017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.291 [2024-11-27 05:49:37.212025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.217442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.217463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:49.291 [2024-11-27 05:49:37.217471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.291 [2024-11-27 05:49:37.222985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.291 [2024-11-27 05:49:37.223006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.223014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.228334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.228354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.228362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.233777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.233798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.233806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.239954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.239979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.239987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.244490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.244512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.244521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.249919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.249940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.249948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.255351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.255372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.255380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.260644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.260665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.260679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.265930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.265950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.265958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.271443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.271463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.271471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.276870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.276891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.276899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.282414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 
00:27:49.292 [2024-11-27 05:49:37.282435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.282442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.292 [2024-11-27 05:49:37.288530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.292 [2024-11-27 05:49:37.288550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.292 [2024-11-27 05:49:37.288558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.294319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.294340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.294348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.301234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.301256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.301265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.308589] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.308612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.308621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.315452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.315475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.315483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.323210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.323234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.323243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.331550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.331574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.331582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.338348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.338371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.338380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.343152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.343174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.343186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.348387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.348410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.348418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.353570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.353592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.353599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.358767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.358789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.358796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.364066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.364087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.364095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.369514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.369536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.369544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.374980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.375001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.553 [2024-11-27 05:49:37.375010] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.553 [2024-11-27 05:49:37.380308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.553 [2024-11-27 05:49:37.380330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.380337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.385514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.385535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.385543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.390785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.390810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.390819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.396050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.396071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.396080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.401294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.401315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.401322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.406514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.406536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.406544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.411743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.411764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.411772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.416946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.416967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.416974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.422136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.422157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.422166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.427350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.427371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.427378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.432513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.432534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.432542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.437708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.437729] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.437737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.442848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.442869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.442877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.447951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.447972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.447979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.453144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.453165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.453173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.458311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.458332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.458340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.463498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.463520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.463528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.468682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.468703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.468711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.473846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.473867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.473875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.478978] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.478999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.479010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.484191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.484213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.484220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.489387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.489409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.489417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.494562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.494583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.494592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.499844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.499866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.499874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.505118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.505140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.505148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.510419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.510440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.510449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.515578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.554 [2024-11-27 05:49:37.515599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.554 [2024-11-27 05:49:37.515607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.554 [2024-11-27 05:49:37.520810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.555 [2024-11-27 05:49:37.520832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.555 [2024-11-27 05:49:37.520840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.555 [2024-11-27 05:49:37.525964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.555 [2024-11-27 05:49:37.525989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.555 [2024-11-27 05:49:37.525997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.555 [2024-11-27 05:49:37.531164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.555 [2024-11-27 05:49:37.531184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.555 [2024-11-27 05:49:37.531192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.555 [2024-11-27 05:49:37.536335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.555 [2024-11-27 05:49:37.536356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.555 [2024-11-27 05:49:37.536364] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.555 [2024-11-27 05:49:37.541483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.555 [2024-11-27 05:49:37.541504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.555 [2024-11-27 05:49:37.541512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.555 [2024-11-27 05:49:37.546650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.555 [2024-11-27 05:49:37.546677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.555 [2024-11-27 05:49:37.546686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.555 [2024-11-27 05:49:37.551896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.555 [2024-11-27 05:49:37.551918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.555 [2024-11-27 05:49:37.551926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.557197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.557219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.557227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.562360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.562381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.562389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.567519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.567540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.567548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.572677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.572700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.572707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.577900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.577920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.577929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.583923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.583945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.583953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.591300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.591322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.591331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.599171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.599193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.599201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.606426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.606449] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.606457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.613831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.613854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.613863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.621347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.621369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.621378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.629110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.815 [2024-11-27 05:49:37.629136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.815 [2024-11-27 05:49:37.629144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.815 [2024-11-27 05:49:37.636703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.636724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.636733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.643983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.644005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.644013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.651546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.651569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.651577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.658797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.658819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.658828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.666718] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.666740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.666748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.674132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.674155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.674164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.681142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.681164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.681172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.688681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.688703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.688712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.696133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.696155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.696164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.701285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.701308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.701317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.706496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.706517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.706525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.711708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.711730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.711739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.716620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.716642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.716651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.721751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.721773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.721781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.726791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.726812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.726820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.731946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.731967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.731975] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.737335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.737357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.737368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.743663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.743691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.743700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.749190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.749212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.749220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.754521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.754544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.754552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.759932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.759954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.759962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.765356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.765378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.765385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.770825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.770847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.770855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.776333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.776354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.776362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.781580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.781602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.781611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.786803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.786829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.786836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.790295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.790316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.790324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.794559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.794581] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.794588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.799806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.799827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.799835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.805008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.805029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.805037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.810211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.810232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.810240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:49.816 [2024-11-27 05:49:37.815412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f0c1a0) 00:27:49.816 [2024-11-27 05:49:37.815433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:49.816 [2024-11-27 05:49:37.815441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.076 [2024-11-27 05:49:37.820626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.820649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.820657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.825813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.825834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.825842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.830999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.831021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.831029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.836177] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.836198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.836206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.841359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.841381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.841388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.846606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.846627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.846635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.851775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.851795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.851803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.856893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.856914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.856922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.862081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.862102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.862110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.867244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.867265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.867273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.872435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.872458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.872469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.877607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.877628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.877638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.882851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.882873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.882881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.888014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.888035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.888044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.893193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.893214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 
05:49:37.893222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.898668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.898697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.898705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.904590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.904612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.904621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.910855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.910878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.910888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.916132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.916155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.916163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.921356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.921379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.921387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.926549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.926571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.926578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.931758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.931780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.931789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.936960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.936981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.936989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.942214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.942235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.942243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.947458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.947479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.077 [2024-11-27 05:49:37.947488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.077 [2024-11-27 05:49:37.952692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.077 [2024-11-27 05:49:37.952714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:37.952722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:37.957858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:37.957880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:37.957888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:37.963018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:37.963040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:37.963052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:37.968542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:37.968564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:37.968573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:37.974687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:37.974710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:37.974718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:37.980210] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:37.980231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:37.980239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.078 5521.00 IOPS, 690.12 MiB/s [2024-11-27T04:49:38.082Z] [2024-11-27 05:49:37.986846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:37.986868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:37.986876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:37.992098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:37.992120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:37.992128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:37.997337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:37.997358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:37.997367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.002561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.002583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.002593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.007885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.007906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.007915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.013150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.013176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.013185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.018390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.018412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.018420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.023614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.023635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.023643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.028865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.028887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.028895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.034025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.034046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.034054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.039186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.039207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:50.078 [2024-11-27 05:49:38.039215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.044420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.044441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.044449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.049659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.049685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.049693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.054894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.054915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.054923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.060125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.078 [2024-11-27 05:49:38.060147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.078 [2024-11-27 05:49:38.060155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.078 [2024-11-27 05:49:38.065319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.079 [2024-11-27 05:49:38.065340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.079 [2024-11-27 05:49:38.065348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.079 [2024-11-27 05:49:38.070487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.079 [2024-11-27 05:49:38.070508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.079 [2024-11-27 05:49:38.070516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.079 [2024-11-27 05:49:38.075742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.079 [2024-11-27 05:49:38.075763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.079 [2024-11-27 05:49:38.075770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.081017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.081038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.081046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.085917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.085938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.085945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.091052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.091073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.091081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.096225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.096246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.096254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.101434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 
00:27:50.339 [2024-11-27 05:49:38.101454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.101466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.106661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.106689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.106698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.111852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.111873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.111881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.117032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.117054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.117062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.122274] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.122294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.122303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.127484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.127505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.127513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.132732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.132753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.339 [2024-11-27 05:49:38.132761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.339 [2024-11-27 05:49:38.137955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.339 [2024-11-27 05:49:38.137976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.137983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.143157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.143178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.143186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.148397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.148422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.148430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.153646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.153667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.153683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.158848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.158869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.158877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.164195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.164216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.164225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.170077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.170098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.170107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.175334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.175356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.175364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.181030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.181051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.181058] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.186281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.186302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.186310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.191529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.191550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.191557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.196771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.196792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.196800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.202052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.202074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.202082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.207318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.207340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.207348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.212277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.212298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.212306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.217402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.217423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.217431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.222376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.222397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.222406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.227437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.227458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.227465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.232748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.232770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.232778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.238113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.238138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.238146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.340 [2024-11-27 05:49:38.243274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.340 [2024-11-27 05:49:38.243295] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.340 [2024-11-27 05:49:38.243303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.248565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.248586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.248595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.253912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.253933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.253941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.259211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.259247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.259256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.264602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.264623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.264631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.269933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.269959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.269967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.275219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.275241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.275249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.280490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.280511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.280519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.285745] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.285766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.285774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.290446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.290469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.290477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.295636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.295658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.295666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.300906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.300927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.300934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.306119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.306140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.306148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.311305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.311326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.311334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.316538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.316559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.316567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.321782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.321803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.321812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.327018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.327038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.327050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.332250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.332271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.332279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.341 [2024-11-27 05:49:38.337480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.341 [2024-11-27 05:49:38.337501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.341 [2024-11-27 05:49:38.337509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.342700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.603 [2024-11-27 05:49:38.342721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.603 [2024-11-27 05:49:38.342730] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.347913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.603 [2024-11-27 05:49:38.347934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.603 [2024-11-27 05:49:38.347943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.353192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.603 [2024-11-27 05:49:38.353212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.603 [2024-11-27 05:49:38.353220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.358370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.603 [2024-11-27 05:49:38.358391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.603 [2024-11-27 05:49:38.358399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.363581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.603 [2024-11-27 05:49:38.363602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:50.603 [2024-11-27 05:49:38.363610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.368815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.603 [2024-11-27 05:49:38.368835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.603 [2024-11-27 05:49:38.368843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.374144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.603 [2024-11-27 05:49:38.374169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.603 [2024-11-27 05:49:38.374177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.379433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.603 [2024-11-27 05:49:38.379454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.603 [2024-11-27 05:49:38.379462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.384559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.603 [2024-11-27 05:49:38.384579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.603 [2024-11-27 05:49:38.384587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.389750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.603 [2024-11-27 05:49:38.389771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.603 [2024-11-27 05:49:38.389778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.603 [2024-11-27 05:49:38.394928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.394949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.394956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.400170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.400191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.400198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.405406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.405427] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.405435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.411444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.411466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.411474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.416850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.416871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.416879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.422050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.422071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.422079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.427200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.427221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.427229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.432432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.432454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.432462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.437612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.437633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.437641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.442823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.442844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.442852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.448680] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.448701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.448709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.454042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.454063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.454071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.459215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.459235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.459243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.464332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.464353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.464365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.469536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.469556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.469565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.474736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.474757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.474766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.479914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.479935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.479944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.485065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.485086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.485093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.490212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.490232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.490241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.495414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.495435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.495443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.500618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.500639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.500646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.505891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.505912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.505920] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.511184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.511209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.511219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.516525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.516546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.516554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.521721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.521742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.521751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.526891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.526913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.526921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.532132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.604 [2024-11-27 05:49:38.532154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.604 [2024-11-27 05:49:38.532161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.604 [2024-11-27 05:49:38.537371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.537392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.537400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.542544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.542565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.542572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.547720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.547741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.547749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.552924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.552945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.552957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.558471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.558493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.558501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.563964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.563985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.563993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.569448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.569469] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.569477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.575080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.575101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.575109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.580892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.580913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.580920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.586281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.586302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.586310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.591566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.591588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.591596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.596863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.596884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.596892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.605 [2024-11-27 05:49:38.602271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.605 [2024-11-27 05:49:38.602299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-11-27 05:49:38.602306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.865 [2024-11-27 05:49:38.607817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.865 [2024-11-27 05:49:38.607839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.865 [2024-11-27 05:49:38.607847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.865 [2024-11-27 05:49:38.613432] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.613452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.613459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.619029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.619050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.619058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.624499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.624520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.624527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.630253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.630274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.630281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.635684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.635706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.635714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.641440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.641462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.641470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.646728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.646749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.646757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.652117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.652139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.652147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.657438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.657460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.657468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.662776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.662797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.662806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.668443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.668464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.668472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.673945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.673966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.673974] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.679578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.679599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.679607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.685003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.685024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.685032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.690353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.690373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.690381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.695727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.695748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.695760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.701126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.701147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.701155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.706501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.706522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.706530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.711808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.711829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.711837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.717304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.717325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.717333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.722621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.722642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.722650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.866 [2024-11-27 05:49:38.728048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.866 [2024-11-27 05:49:38.728069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.866 [2024-11-27 05:49:38.728077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.733502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.733524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.733532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.739072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.739094] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.739101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.744471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.744496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.744505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.749808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.749829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.749837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.755360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.755381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.755389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.760638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.760659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.760667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.766057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.766079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.766087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.771446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.771468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.771476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.776751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.776771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.776779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.781938] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.781960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.781968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.787332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.787370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.787379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.792747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.792768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.792776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.797967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.797988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.797997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.803343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.803364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.803372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.808727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.808748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.808756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.814093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.814114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.814121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.819402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.819423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.819431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.824690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.824711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.824719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.829952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.829973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.829981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.835189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.835210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.835222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.840623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.867 [2024-11-27 05:49:38.840643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-11-27 05:49:38.840651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.867 [2024-11-27 05:49:38.846144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.868 [2024-11-27 05:49:38.846166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-11-27 05:49:38.846174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.868 [2024-11-27 05:49:38.851484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.868 [2024-11-27 05:49:38.851505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-11-27 05:49:38.851513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.868 [2024-11-27 05:49:38.856741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.868 [2024-11-27 05:49:38.856762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-11-27 05:49:38.856770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.868 [2024-11-27 05:49:38.862034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:50.868 [2024-11-27 05:49:38.862055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:50.868 [2024-11-27 05:49:38.862063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.127 [2024-11-27 05:49:38.867409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.867432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.867440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.872749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.872770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.872778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.878159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.878180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.878189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.883533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.883555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.883563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.888835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.888856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.888864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.894113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.894134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.894142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.899466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.899487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.899495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.904999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.905020] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.905028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.910199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.910220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.910228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.915449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.915470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.915478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.920712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.920733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.920741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.926022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.926043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.926055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.931288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.931308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.931316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.936794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.936815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.936823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.942109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.942130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.942137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.947373] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.947394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.947402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.952705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.952727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.952734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.958084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.958105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.958113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.963604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.963625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.963633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.967123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.967144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.967152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.128 [2024-11-27 05:49:38.971306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.128 [2024-11-27 05:49:38.971332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.128 [2024-11-27 05:49:38.971340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.129 [2024-11-27 05:49:38.976697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.129 [2024-11-27 05:49:38.976718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-11-27 05:49:38.976726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.129 [2024-11-27 05:49:38.982023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.129 [2024-11-27 05:49:38.982044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-11-27 05:49:38.982053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.129 5698.00 IOPS, 712.25 MiB/s [2024-11-27T04:49:39.133Z] [2024-11-27 05:49:38.988318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f0c1a0) 00:27:51.129 [2024-11-27 05:49:38.988339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-11-27 05:49:38.988347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.129 00:27:51.129 Latency(us) 00:27:51.129 [2024-11-27T04:49:39.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.129 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:51.129 nvme0n1 : 2.00 5695.22 711.90 0.00 0.00 2806.24 635.86 8488.47 00:27:51.129 [2024-11-27T04:49:39.133Z] =================================================================================================================== 00:27:51.129 [2024-11-27T04:49:39.133Z] Total : 5695.22 711.90 0.00 0.00 2806.24 635.86 8488.47 00:27:51.129 { 00:27:51.129 "results": [ 00:27:51.129 { 00:27:51.129 "job": "nvme0n1", 00:27:51.129 "core_mask": "0x2", 00:27:51.129 "workload": "randread", 00:27:51.129 "status": "finished", 00:27:51.129 "queue_depth": 16, 00:27:51.129 "io_size": 131072, 00:27:51.129 "runtime": 2.003784, 00:27:51.129 "iops": 5695.224634990598, 00:27:51.129 "mibps": 711.9030793738248, 00:27:51.129 "io_failed": 0, 00:27:51.129 "io_timeout": 0, 00:27:51.129 "avg_latency_us": 2806.2353132041458, 00:27:51.129 "min_latency_us": 635.8552380952381, 00:27:51.129 "max_latency_us": 8488.47238095238 00:27:51.129 } 00:27:51.129 ], 00:27:51.129 "core_count": 1 00:27:51.129 } 00:27:51.129 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 
00:27:51.129 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:51.129 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:51.129 | .driver_specific 00:27:51.129 | .nvme_error 00:27:51.129 | .status_code 00:27:51.129 | .command_transient_transport_error' 00:27:51.129 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 369 > 0 )) 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1911420 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1911420 ']' 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1911420 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1911420 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1911420' 00:27:51.390 killing process with pid 1911420 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@973 -- # kill 1911420 00:27:51.390 Received shutdown signal, test time was about 2.000000 seconds 00:27:51.390 00:27:51.390 Latency(us) 00:27:51.390 [2024-11-27T04:49:39.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.390 [2024-11-27T04:49:39.394Z] =================================================================================================================== 00:27:51.390 [2024-11-27T04:49:39.394Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:51.390 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1911420 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1911945 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1911945 /var/tmp/bperf.sock 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1911945 ']' 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:51.650 05:49:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:51.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.650 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:51.650 [2024-11-27 05:49:39.473164] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:27:51.650 [2024-11-27 05:49:39.473213] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911945 ] 00:27:51.650 [2024-11-27 05:49:39.547838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.650 [2024-11-27 05:49:39.590114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.909 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.909 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:51.909 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:51.909 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:51.909 05:49:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:51.909 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.909 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:51.909 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.909 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:51.909 05:49:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.168 nvme0n1 00:27:52.168 05:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:52.168 05:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.168 05:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:52.168 05:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.168 05:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:52.168 05:49:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:52.427 Running I/O for 2 seconds... 
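The setup above injects CRC32C corruption into the accel layer (`accel_error_inject_error -o crc32c -t corrupt -i 256`) after attaching the controller with `--ddgst`, so each corrupted digest surfaces as the `data_crc32_calc_done: *ERROR*: Data digest error` records that follow. The receiver-side check itself is simple: recompute the digest over the PDU payload and compare it with the digest carried in the PDU. A hedged sketch of that comparison, using Python's `zlib.crc32` purely as a stand-in (NVMe/TCP actually uses CRC32C, which is a different polynomial; the function and payload below are illustrative, not SPDK code):

```python
import zlib

def check_data_digest(payload: bytes, received_digest: int) -> bool:
    # Recompute the digest over the received payload and compare it with
    # the digest field from the PDU; a mismatch is what the log reports
    # as a "data digest error". zlib.crc32 stands in for CRC32C here.
    return zlib.crc32(payload) == received_digest

payload = b"example PDU payload"
good_digest = zlib.crc32(payload)
assert check_data_digest(payload, good_digest)

# Flip one byte, analogous to what the injected crc32c corruption causes:
corrupted = bytes([payload[0] ^ 0xFF]) + payload[1:]
assert not check_data_digest(corrupted, good_digest)
```

On a mismatch the SPDK host does not trust the data and completes the command with a transient transport error (status 00/22), which is exactly the completion status repeated throughout the records below.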
00:27:52.427 [2024-11-27 05:49:40.268548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eed4e8 00:27:52.427 [2024-11-27 05:49:40.269359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.427 [2024-11-27 05:49:40.269387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:52.427 [2024-11-27 05:49:40.278238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef8a50 00:27:52.427 [2024-11-27 05:49:40.278828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.427 [2024-11-27 05:49:40.278851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:52.427 [2024-11-27 05:49:40.288970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee7818 00:27:52.427 [2024-11-27 05:49:40.290385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.427 [2024-11-27 05:49:40.290404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:52.427 [2024-11-27 05:49:40.298712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efa3a0 00:27:52.427 [2024-11-27 05:49:40.300214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.427 [2024-11-27 05:49:40.300233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:52.427 [2024-11-27 05:49:40.305249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edfdc0 00:27:52.427 [2024-11-27 05:49:40.305936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.427 [2024-11-27 05:49:40.305954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:52.427 [2024-11-27 05:49:40.315276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef1ca0 00:27:52.427 [2024-11-27 05:49:40.316229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.427 [2024-11-27 05:49:40.316247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.324679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ede038 00:27:52.428 [2024-11-27 05:49:40.325148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.428 [2024-11-27 05:49:40.325167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.334350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee0ea0 00:27:52.428 [2024-11-27 05:49:40.334946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.428 [2024-11-27 05:49:40.334965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.344051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6300 00:27:52.428 [2024-11-27 05:49:40.344772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.428 [2024-11-27 05:49:40.344791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.353065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef96f8 00:27:52.428 [2024-11-27 05:49:40.354097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.428 [2024-11-27 05:49:40.354115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.362369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee1710 00:27:52.428 [2024-11-27 05:49:40.363327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.428 [2024-11-27 05:49:40.363345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.372022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edece0 00:27:52.428 [2024-11-27 05:49:40.372740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.428 [2024-11-27 05:49:40.372759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.381006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee4de8 00:27:52.428 [2024-11-27 05:49:40.382048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.428 [2024-11-27 05:49:40.382068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.390538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeaef0 00:27:52.428 [2024-11-27 05:49:40.391601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.428 [2024-11-27 05:49:40.391620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.400179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eee190 00:27:52.428 [2024-11-27 05:49:40.401365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.428 [2024-11-27 05:49:40.401383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.409871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeff18 00:27:52.428 [2024-11-27 05:49:40.411178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7683 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:52.428 [2024-11-27 05:49:40.411197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.419553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efc998 00:27:52.428 [2024-11-27 05:49:40.420996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.428 [2024-11-27 05:49:40.421014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:52.428 [2024-11-27 05:49:40.429215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef3a28 00:27:52.688 [2024-11-27 05:49:40.430767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.430786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.435711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0ff8 00:27:52.688 [2024-11-27 05:49:40.436452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.436470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.445162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef20d8 00:27:52.688 [2024-11-27 05:49:40.445783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:13101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.445802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.454659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef96f8 00:27:52.688 [2024-11-27 05:49:40.455477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.455495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.463880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef92c0 00:27:52.688 [2024-11-27 05:49:40.464836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.464858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.473319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee5658 00:27:52.688 [2024-11-27 05:49:40.473823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.473842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.482983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef1430 00:27:52.688 [2024-11-27 05:49:40.483596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.483614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.491998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eec840 00:27:52.688 [2024-11-27 05:49:40.492902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.492920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.501539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef6890 00:27:52.688 [2024-11-27 05:49:40.502518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.502537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.510992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef6020 00:27:52.688 [2024-11-27 05:49:40.511505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.511524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.520853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee38d0 00:27:52.688 
[2024-11-27 05:49:40.521493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.521512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.529840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eef6a8 00:27:52.688 [2024-11-27 05:49:40.530772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.530791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.539093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efac10 00:27:52.688 [2024-11-27 05:49:40.539955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.539973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.549648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee4140 00:27:52.688 [2024-11-27 05:49:40.551016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.551034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.558151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2260180) with pdu=0x200016efef90 00:27:52.688 [2024-11-27 05:49:40.559509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.688 [2024-11-27 05:49:40.559527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:52.688 [2024-11-27 05:49:40.566104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edf550 00:27:52.689 [2024-11-27 05:49:40.566840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.566858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.575442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee73e0 00:27:52.689 [2024-11-27 05:49:40.576186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.576204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.584713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6738 00:27:52.689 [2024-11-27 05:49:40.585426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.585445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.594963] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6300 00:27:52.689 [2024-11-27 05:49:40.595733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.595751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.603633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eebb98 00:27:52.689 [2024-11-27 05:49:40.604870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.604889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.612766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0bc0 00:27:52.689 [2024-11-27 05:49:40.613779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.613797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.621725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eea680 00:27:52.689 [2024-11-27 05:49:40.622806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.622823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:27:52.689 [2024-11-27 05:49:40.631415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee01f8 00:27:52.689 [2024-11-27 05:49:40.632606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.632625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.639787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee3d08 00:27:52.689 [2024-11-27 05:49:40.640555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.640573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.648983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eef6a8 00:27:52.689 [2024-11-27 05:49:40.649628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.649646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.659138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efcdd0 00:27:52.689 [2024-11-27 05:49:40.660439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.660458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.667816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee8088 00:27:52.689 [2024-11-27 05:49:40.669104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.669122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.676185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eef270 00:27:52.689 [2024-11-27 05:49:40.677137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.677155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:52.689 [2024-11-27 05:49:40.685111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef1ca0 00:27:52.689 [2024-11-27 05:49:40.686104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.689 [2024-11-27 05:49:40.686122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:52.950 [2024-11-27 05:49:40.694263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee8d30 00:27:52.950 [2024-11-27 05:49:40.695250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.950 [2024-11-27 05:49:40.695269] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:52.950 [2024-11-27 05:49:40.703663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edf118 00:27:52.950 [2024-11-27 05:49:40.704421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.950 [2024-11-27 05:49:40.704443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:52.950 [2024-11-27 05:49:40.712845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef3a28 00:27:52.950 [2024-11-27 05:49:40.713941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.950 [2024-11-27 05:49:40.713959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:52.950 [2024-11-27 05:49:40.721846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efa3a0 00:27:52.950 [2024-11-27 05:49:40.722919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.950 [2024-11-27 05:49:40.722937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:52.950 [2024-11-27 05:49:40.730808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef7100 00:27:52.950 [2024-11-27 05:49:40.731891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.950 [2024-11-27 05:49:40.731910] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.739845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efc998 00:27:52.951 [2024-11-27 05:49:40.740946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.740965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.748854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eecc78 00:27:52.951 [2024-11-27 05:49:40.749929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.749946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.757936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee27f0 00:27:52.951 [2024-11-27 05:49:40.758953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.758971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.766823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee3498 00:27:52.951 [2024-11-27 05:49:40.767823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:52.951 [2024-11-27 05:49:40.767841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.775065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ede8a8 00:27:52.951 [2024-11-27 05:49:40.776419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.776437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.783190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eed920 00:27:52.951 [2024-11-27 05:49:40.783868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.783886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.793806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edf118 00:27:52.951 [2024-11-27 05:49:40.794685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.794704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.803002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0ff8 00:27:52.951 [2024-11-27 05:49:40.803896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7490 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.803915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.811555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eebb98 00:27:52.951 [2024-11-27 05:49:40.812435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.812452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.821041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee0ea0 00:27:52.951 [2024-11-27 05:49:40.822023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.822041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.829890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeb760 00:27:52.951 [2024-11-27 05:49:40.830640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.830658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.838215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eea248 00:27:52.951 [2024-11-27 05:49:40.838954] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.838972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.848880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eea248 00:27:52.951 [2024-11-27 05:49:40.850082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.850101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.858322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee3060 00:27:52.951 [2024-11-27 05:49:40.859676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.859695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.867766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef7da8 00:27:52.951 [2024-11-27 05:49:40.869205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.869223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.876636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eed920 00:27:52.951 [2024-11-27 05:49:40.878118] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.878137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.885559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee27f0 00:27:52.951 [2024-11-27 05:49:40.886777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.886796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.893922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef9b30 00:27:52.951 [2024-11-27 05:49:40.894999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.895017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.902577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef6020 00:27:52.951 [2024-11-27 05:49:40.903442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.903461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.911514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with 
pdu=0x200016ef6020 00:27:52.951 [2024-11-27 05:49:40.912365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.912384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.920456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef6020 00:27:52.951 [2024-11-27 05:49:40.921308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.921327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.928790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef7da8 00:27:52.951 [2024-11-27 05:49:40.929624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.929641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.937797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eea248 00:27:52.951 [2024-11-27 05:49:40.938619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.938640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:52.951 [2024-11-27 05:49:40.947276] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef35f0 00:27:52.951 [2024-11-27 05:49:40.948269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:52.951 [2024-11-27 05:49:40.948287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:53.212 [2024-11-27 05:49:40.958076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6fa8 00:27:53.212 [2024-11-27 05:49:40.959287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.212 [2024-11-27 05:49:40.959306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:53.212 [2024-11-27 05:49:40.965320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eef6a8 00:27:53.212 [2024-11-27 05:49:40.966048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.212 [2024-11-27 05:49:40.966067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:53.212 [2024-11-27 05:49:40.974618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef9f68 00:27:53.212 [2024-11-27 05:49:40.975361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.212 [2024-11-27 05:49:40.975379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:53.212 [2024-11-27 
05:49:40.983747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef9b30 00:27:53.212 [2024-11-27 05:49:40.984449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.212 [2024-11-27 05:49:40.984468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:53.212 [2024-11-27 05:49:40.992118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee4140 00:27:53.212 [2024-11-27 05:49:40.992833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.212 [2024-11-27 05:49:40.992851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:53.212 [2024-11-27 05:49:41.001032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee12d8 00:27:53.212 [2024-11-27 05:49:41.001753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.212 [2024-11-27 05:49:41.001771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:53.212 [2024-11-27 05:49:41.010565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee3498 00:27:53.212 [2024-11-27 05:49:41.011389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.212 [2024-11-27 05:49:41.011408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 
sqhd:0025 p:0 m:0 dnr:0 00:27:53.212 [2024-11-27 05:49:41.019738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee23b8 00:27:53.212 [2024-11-27 05:49:41.020560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.212 [2024-11-27 05:49:41.020578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:53.212 [2024-11-27 05:49:41.028033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef7100 00:27:53.212 [2024-11-27 05:49:41.028625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.212 [2024-11-27 05:49:41.028644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:53.212 [2024-11-27 05:49:41.037617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef7100 00:27:53.213 [2024-11-27 05:49:41.038224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.038242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.046635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef7100 00:27:53.213 [2024-11-27 05:49:41.047278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.047296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.055587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee3d08 00:27:53.213 [2024-11-27 05:49:41.056203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.056221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.065189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee1710 00:27:53.213 [2024-11-27 05:49:41.066017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.066035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.074168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee73e0 00:27:53.213 [2024-11-27 05:49:41.075011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.075029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.084969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee73e0 00:27:53.213 [2024-11-27 05:49:41.086253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.086272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.092751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee12d8 00:27:53.213 [2024-11-27 05:49:41.093569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.093587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.101760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efa3a0 00:27:53.213 [2024-11-27 05:49:41.102581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.102599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.110598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6300 00:27:53.213 [2024-11-27 05:49:41.111392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.111410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.119048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef46d0 00:27:53.213 [2024-11-27 05:49:41.119747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 
[2024-11-27 05:49:41.119765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.127852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edfdc0 00:27:53.213 [2024-11-27 05:49:41.128544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.128562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.137030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efc560 00:27:53.213 [2024-11-27 05:49:41.137697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.137715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.147688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efc560 00:27:53.213 [2024-11-27 05:49:41.148830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.148847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.155458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efcdd0 00:27:53.213 [2024-11-27 05:49:41.155922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2027 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.155941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.164722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee95a0 00:27:53.213 [2024-11-27 05:49:41.165374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.165391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.172964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eebb98 00:27:53.213 [2024-11-27 05:49:41.173689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.173707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.183990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef8e88 00:27:53.213 [2024-11-27 05:49:41.185136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.185155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.193386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef8618 00:27:53.213 [2024-11-27 05:49:41.194640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:3 nsid:1 lba:6235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.194657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.201225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2948 00:27:53.213 [2024-11-27 05:49:41.202005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.202023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:53.213 [2024-11-27 05:49:41.210257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2948 00:27:53.213 [2024-11-27 05:49:41.211067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.213 [2024-11-27 05:49:41.211085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.219396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2948 00:27:53.474 [2024-11-27 05:49:41.220260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.220278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.228345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2948 00:27:53.474 [2024-11-27 05:49:41.229217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.229235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.237306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2948 00:27:53.474 [2024-11-27 05:49:41.238172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.238189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.246238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2948 00:27:53.474 [2024-11-27 05:49:41.247106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.247124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:53.474 27697.00 IOPS, 108.19 MiB/s [2024-11-27T04:49:41.478Z] [2024-11-27 05:49:41.255112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2948 00:27:53.474 [2024-11-27 05:49:41.255970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.255987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.264421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2260180) with pdu=0x200016eeff18 00:27:53.474 [2024-11-27 05:49:41.265080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.265099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.273563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef35f0 00:27:53.474 [2024-11-27 05:49:41.274598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.274615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.282818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eea680 00:27:53.474 [2024-11-27 05:49:41.283839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.283857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.291227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeaab8 00:27:53.474 [2024-11-27 05:49:41.292540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.292558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.300518] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eec408 00:27:53.474 [2024-11-27 05:49:41.301417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.301435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.309487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee4140 00:27:53.474 [2024-11-27 05:49:41.310330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.310347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.318593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeaef0 00:27:53.474 [2024-11-27 05:49:41.319497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.319515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.328826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efb048 00:27:53.474 [2024-11-27 05:49:41.330135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.330152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:27:53.474 [2024-11-27 05:49:41.338313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef92c0 00:27:53.474 [2024-11-27 05:49:41.339733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.339751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.344632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eec840 00:27:53.474 [2024-11-27 05:49:41.345234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.345252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.354723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef3e60 00:27:53.474 [2024-11-27 05:49:41.356198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.356216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.362592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef4298 00:27:53.474 [2024-11-27 05:49:41.363302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.363320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.372617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee5658 00:27:53.474 [2024-11-27 05:49:41.373482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.373501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.381782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eebb98 00:27:53.474 [2024-11-27 05:49:41.382655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.382679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.390747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeea00 00:27:53.474 [2024-11-27 05:49:41.391614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.391632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.399814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef1430 00:27:53.474 [2024-11-27 05:49:41.400691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.400710] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.408885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee38d0 00:27:53.474 [2024-11-27 05:49:41.409751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.409772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:53.474 [2024-11-27 05:49:41.417862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eff3c8 00:27:53.474 [2024-11-27 05:49:41.418722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.474 [2024-11-27 05:49:41.418741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:53.475 [2024-11-27 05:49:41.427077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee84c0 00:27:53.475 [2024-11-27 05:49:41.427728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.475 [2024-11-27 05:49:41.427746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:53.475 [2024-11-27 05:49:41.436466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee2c28 00:27:53.475 [2024-11-27 05:49:41.437230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.475 [2024-11-27 05:49:41.437249] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:53.475 [2024-11-27 05:49:41.445584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef7100 00:27:53.475 [2024-11-27 05:49:41.446690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.475 [2024-11-27 05:49:41.446708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:53.475 [2024-11-27 05:49:41.453788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eddc00 00:27:53.475 [2024-11-27 05:49:41.455228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.475 [2024-11-27 05:49:41.455246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:53.475 [2024-11-27 05:49:41.462251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef1430 00:27:53.475 [2024-11-27 05:49:41.462968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.475 [2024-11-27 05:49:41.462986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.475 [2024-11-27 05:49:41.471232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeea00 00:27:53.475 [2024-11-27 05:49:41.471987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:53.475 [2024-11-27 05:49:41.472006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.735 [2024-11-27 05:49:41.480468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6fa8 00:27:53.735 [2024-11-27 05:49:41.481212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.735 [2024-11-27 05:49:41.481231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.735 [2024-11-27 05:49:41.489505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef20d8 00:27:53.735 [2024-11-27 05:49:41.490249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.735 [2024-11-27 05:49:41.490270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.735 [2024-11-27 05:49:41.498575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef8618 00:27:53.735 [2024-11-27 05:49:41.499328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.735 [2024-11-27 05:49:41.499345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.735 [2024-11-27 05:49:41.507585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef8a50 00:27:53.735 [2024-11-27 05:49:41.508355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14604 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.735 [2024-11-27 05:49:41.508373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.735 [2024-11-27 05:49:41.516633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2948 00:27:53.736 [2024-11-27 05:49:41.517398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.517417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.525183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.525886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.525903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.534831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.535571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.535590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.544056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.544816] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.544835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.553058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.553779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.553798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.562034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.562753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.562771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.570975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.571697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.571715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.580028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.580768] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.580786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.589036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.589754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.589773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.598021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.598747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.598765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.606951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.607674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.607693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.615943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with 
pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.616667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.616688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.624841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0788 00:27:53.736 [2024-11-27 05:49:41.625565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.625583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.634055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee99d8 00:27:53.736 [2024-11-27 05:49:41.634576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.634594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.643158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efe2e8 00:27:53.736 [2024-11-27 05:49:41.644017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.644034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.652423] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ede038 00:27:53.736 [2024-11-27 05:49:41.653062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.653080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.661524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef8e88 00:27:53.736 [2024-11-27 05:49:41.662501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.662519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.670564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef9b30 00:27:53.736 [2024-11-27 05:49:41.671533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.671551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.679712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeb328 00:27:53.736 [2024-11-27 05:49:41.680678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.680696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 
05:49:41.688722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efbcf0 00:27:53.736 [2024-11-27 05:49:41.689689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.689707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.697761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee0a68 00:27:53.736 [2024-11-27 05:49:41.698744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.698762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.706840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee2c28 00:27:53.736 [2024-11-27 05:49:41.707801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.707818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.715822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eee190 00:27:53.736 [2024-11-27 05:49:41.716689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.716706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:006c p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.724207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ede470 00:27:53.736 [2024-11-27 05:49:41.725183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.725203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:53.736 [2024-11-27 05:49:41.733338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efc128 00:27:53.736 [2024-11-27 05:49:41.733876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.736 [2024-11-27 05:49:41.733894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:53.997 [2024-11-27 05:49:41.742931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efdeb0 00:27:53.997 [2024-11-27 05:49:41.743557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.997 [2024-11-27 05:49:41.743575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.997 [2024-11-27 05:49:41.753271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eed4e8 00:27:53.997 [2024-11-27 05:49:41.754711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.997 [2024-11-27 05:49:41.754727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.997 [2024-11-27 05:49:41.762418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efcdd0 00:27:53.997 [2024-11-27 05:49:41.763853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.997 [2024-11-27 05:49:41.763871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:53.997 [2024-11-27 05:49:41.769874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef57b0 00:27:53.997 [2024-11-27 05:49:41.770529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.997 [2024-11-27 05:49:41.770547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:53.997 [2024-11-27 05:49:41.779398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2948 00:27:53.997 [2024-11-27 05:49:41.780144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.997 [2024-11-27 05:49:41.780163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:53.997 [2024-11-27 05:49:41.787898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee2c28 00:27:53.997 [2024-11-27 05:49:41.789197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.997 [2024-11-27 05:49:41.789216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:53.997 [2024-11-27 05:49:41.795849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee3498 00:27:53.997 [2024-11-27 05:49:41.796574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.997 [2024-11-27 05:49:41.796592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:53.997 [2024-11-27 05:49:41.805343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0bc0 00:27:53.997 [2024-11-27 05:49:41.806186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.997 [2024-11-27 05:49:41.806204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:53.997 [2024-11-27 05:49:41.814464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efeb58 00:27:53.997 [2024-11-27 05:49:41.815292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.997 [2024-11-27 05:49:41.815310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:53.997 [2024-11-27 05:49:41.825168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef8a50 00:27:53.998 [2024-11-27 05:49:41.826355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:53.998 [2024-11-27 05:49:41.826372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.833643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef8a50 00:27:53.998 [2024-11-27 05:49:41.834723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.834741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.842075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efef90 00:27:53.998 [2024-11-27 05:49:41.843040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.843058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.850783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef8a50 00:27:53.998 [2024-11-27 05:49:41.851736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.851753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.859945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efdeb0 00:27:53.998 [2024-11-27 05:49:41.860882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:21111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.860900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.869198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edf550 00:27:53.998 [2024-11-27 05:49:41.870170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.870188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.878155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2510 00:27:53.998 [2024-11-27 05:49:41.879226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.879245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.887211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef0ff8 00:27:53.998 [2024-11-27 05:49:41.888292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.888310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.896917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6300 00:27:53.998 [2024-11-27 05:49:41.898216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.898233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.906302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeaab8 00:27:53.998 [2024-11-27 05:49:41.907735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.907754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.915773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef4f40 00:27:53.998 [2024-11-27 05:49:41.917317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.917335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.922111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edfdc0 00:27:53.998 [2024-11-27 05:49:41.922828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.922846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.931628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee8d30 00:27:53.998 
[2024-11-27 05:49:41.932622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.932640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.941164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee0630 00:27:53.998 [2024-11-27 05:49:41.942238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.942256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.950275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee73e0 00:27:53.998 [2024-11-27 05:49:41.951354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.951371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.959212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee5658 00:27:53.998 [2024-11-27 05:49:41.959868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.959890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.970027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2260180) with pdu=0x200016ee7818 00:27:53.998 [2024-11-27 05:49:41.971577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.971593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.976447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ede8a8 00:27:53.998 [2024-11-27 05:49:41.977198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.977215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.985626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef35f0 00:27:53.998 [2024-11-27 05:49:41.986274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.986293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:53.998 [2024-11-27 05:49:41.995140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee95a0 00:27:53.998 [2024-11-27 05:49:41.996161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:53.998 [2024-11-27 05:49:41.996180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.004554] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee5220 00:27:54.259 [2024-11-27 05:49:42.005081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.005099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.014010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee1710 00:27:54.259 [2024-11-27 05:49:42.014668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.014693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.022477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eddc00 00:27:54.259 [2024-11-27 05:49:42.023710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.023728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.031853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeaab8 00:27:54.259 [2024-11-27 05:49:42.033175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.033193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:27:54.259 [2024-11-27 05:49:42.039596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef2948 00:27:54.259 [2024-11-27 05:49:42.040368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.040386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.049328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edece0 00:27:54.259 [2024-11-27 05:49:42.050198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.050215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.058410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eea680 00:27:54.259 [2024-11-27 05:49:42.058826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.058844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.068822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee5220 00:27:54.259 [2024-11-27 05:49:42.070041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.070058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.078197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee9168 00:27:54.259 [2024-11-27 05:49:42.079539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.079557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.087691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee73e0 00:27:54.259 [2024-11-27 05:49:42.089161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.089179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.094325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee95a0 00:27:54.259 [2024-11-27 05:49:42.095071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.095090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.103825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee5220 00:27:54.259 [2024-11-27 05:49:42.104707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.104725] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.115082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016efd208 00:27:54.259 [2024-11-27 05:49:42.116466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.116485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.121687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edfdc0 00:27:54.259 [2024-11-27 05:49:42.122248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.122265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.132346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edfdc0 00:27:54.259 [2024-11-27 05:49:42.133463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.133480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.141486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6300 00:27:54.259 [2024-11-27 05:49:42.142623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.142640] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:54.259 [2024-11-27 05:49:42.150075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef1430 00:27:54.259 [2024-11-27 05:49:42.151084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.259 [2024-11-27 05:49:42.151101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.160748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6300 00:27:54.260 [2024-11-27 05:49:42.162226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.260 [2024-11-27 05:49:42.162244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.167091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee23b8 00:27:54.260 [2024-11-27 05:49:42.167757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.260 [2024-11-27 05:49:42.167774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.176052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6b70 00:27:54.260 [2024-11-27 05:49:42.176819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21057 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:54.260 [2024-11-27 05:49:42.176836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.185500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eeaab8 00:27:54.260 [2024-11-27 05:49:42.186396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.260 [2024-11-27 05:49:42.186414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.194617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016eef270 00:27:54.260 [2024-11-27 05:49:42.195066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.260 [2024-11-27 05:49:42.195088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.204080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016edece0 00:27:54.260 [2024-11-27 05:49:42.204642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.260 [2024-11-27 05:49:42.204660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.214492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee12d8 00:27:54.260 [2024-11-27 05:49:42.215798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 
nsid:1 lba:20418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.260 [2024-11-27 05:49:42.215817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.222293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee6b70 00:27:54.260 [2024-11-27 05:49:42.223098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.260 [2024-11-27 05:49:42.223117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.231486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef1868 00:27:54.260 [2024-11-27 05:49:42.232170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.260 [2024-11-27 05:49:42.232190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.239936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee5a90 00:27:54.260 [2024-11-27 05:49:42.241201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.260 [2024-11-27 05:49:42.241218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:54.260 [2024-11-27 05:49:42.247921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ee2c28 00:27:54.260 [2024-11-27 05:49:42.248570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.260 [2024-11-27 05:49:42.248588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:54.260 27994.50 IOPS, 109.35 MiB/s [2024-11-27T04:49:42.264Z] [2024-11-27 05:49:42.259607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260180) with pdu=0x200016ef3a28 00:27:54.519 [2024-11-27 05:49:42.260901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:54.519 [2024-11-27 05:49:42.260920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:54.519 00:27:54.519 Latency(us) 00:27:54.520 [2024-11-27T04:49:42.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.520 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:54.520 nvme0n1 : 2.01 27994.86 109.35 0.00 0.00 4568.37 1778.83 15354.15 00:27:54.520 [2024-11-27T04:49:42.524Z] =================================================================================================================== 00:27:54.520 [2024-11-27T04:49:42.524Z] Total : 27994.86 109.35 0.00 0.00 4568.37 1778.83 15354.15 00:27:54.520 { 00:27:54.520 "results": [ 00:27:54.520 { 00:27:54.520 "job": "nvme0n1", 00:27:54.520 "core_mask": "0x2", 00:27:54.520 "workload": "randwrite", 00:27:54.520 "status": "finished", 00:27:54.520 "queue_depth": 128, 00:27:54.520 "io_size": 4096, 00:27:54.520 "runtime": 2.006797, 00:27:54.520 "iops": 27994.859470090894, 00:27:54.520 "mibps": 109.35491980504256, 00:27:54.520 "io_failed": 0, 00:27:54.520 "io_timeout": 0, 00:27:54.520 "avg_latency_us": 4568.365337808745, 00:27:54.520 "min_latency_us": 1778.8342857142857, 00:27:54.520 
"max_latency_us": 15354.148571428572 00:27:54.520 } 00:27:54.520 ], 00:27:54.520 "core_count": 1 00:27:54.520 } 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:54.520 | .driver_specific 00:27:54.520 | .nvme_error 00:27:54.520 | .status_code 00:27:54.520 | .command_transient_transport_error' 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1911945 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1911945 ']' 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1911945 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.520 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1911945 00:27:54.779 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:54.779 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:54.779 05:49:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1911945' 00:27:54.779 killing process with pid 1911945 00:27:54.779 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1911945 00:27:54.779 Received shutdown signal, test time was about 2.000000 seconds 00:27:54.779 00:27:54.779 Latency(us) 00:27:54.779 [2024-11-27T04:49:42.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.779 [2024-11-27T04:49:42.783Z] =================================================================================================================== 00:27:54.779 [2024-11-27T04:49:42.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:54.779 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1911945 00:27:54.779 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:54.779 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:54.779 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:54.780 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:54.780 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:54.780 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1912584 00:27:54.780 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1912584 /var/tmp/bperf.sock 00:27:54.780 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:54.780 05:49:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1912584 ']' 00:27:54.780 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:54.780 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:54.780 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:54.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:54.780 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:54.780 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:54.780 [2024-11-27 05:49:42.725075] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:27:54.780 [2024-11-27 05:49:42.725126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912584 ] 00:27:54.780 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:54.780 Zero copy mechanism will not be used. 
00:27:55.039 [2024-11-27 05:49:42.800084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.039 [2024-11-27 05:49:42.841693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.039 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.039 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:55.039 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:55.039 05:49:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:55.297 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:55.297 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.297 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:55.297 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.297 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:55.297 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:55.556 nvme0n1 00:27:55.556 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:55.556 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.556 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:55.817 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.817 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:55.817 05:49:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:55.817 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:55.817 Zero copy mechanism will not be used. 00:27:55.817 Running I/O for 2 seconds... 00:27:55.817 [2024-11-27 05:49:43.661966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.662035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.662061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.668061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.668121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.668141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.817 
[2024-11-27 05:49:43.672748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.672809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.672833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.677373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.677444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.677462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.681893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.681964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.681982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.686767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.686898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.686919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.692363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.692532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.692551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.698388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.698514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.698534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.704063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.704228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.704251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.710564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.710644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.710664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.716586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.716739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.716758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.722796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.722960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.722979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.729169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.729333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.729353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.735951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.736085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.736104] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.743752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.743881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.743900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.751168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.751333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.751352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.758601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.758779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.758799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.765684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.765817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.765839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.772820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.772984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.773003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.779930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.780084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.780103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.786806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.786978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.786997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.794460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.794606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.794625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.801632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.817 [2024-11-27 05:49:43.801761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.817 [2024-11-27 05:49:43.801781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:55.817 [2024-11-27 05:49:43.809269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.818 [2024-11-27 05:49:43.809423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.818 [2024-11-27 05:49:43.809442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:55.818 [2024-11-27 05:49:43.815358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:55.818 [2024-11-27 05:49:43.815416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.818 [2024-11-27 05:49:43.815434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.820001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.820065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.820084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.825207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.825341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.825360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.830254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.830351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.830370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.835311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.835461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.835480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.840311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 
00:27:56.078 [2024-11-27 05:49:43.840405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.840424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.845394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.845484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.845503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.850394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.850474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.850493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.854834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.854888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.854907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.859205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.859319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.859339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.863616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.863684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.863708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.868022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.868080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.868098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.872381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.872440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.872458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.876663] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.876732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.876751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.078 [2024-11-27 05:49:43.881022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.078 [2024-11-27 05:49:43.881081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.078 [2024-11-27 05:49:43.881099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.885431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.885487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.885506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.889939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.889994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.890011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:56.079 [2024-11-27 05:49:43.894299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.894351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.894370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.898575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.898633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.898651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.902969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.903034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.903055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.907251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.907310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.907327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.911668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.911730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.911748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.916443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.916495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.916514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.921736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.921853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.921884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.927592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.927657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.927683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.932583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.932637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.932655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.937694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.937779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.937798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.942408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.942464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.942481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.946947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.947003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:56.079 [2024-11-27 05:49:43.947020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.951357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.951411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.951429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.956062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.956130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.956149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.960635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.960748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.960767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.965373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.965486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.965505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.970042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.970153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.970172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.974579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.974647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.974664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.979291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.979347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.979364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.983816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.983866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.983886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.988463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.988523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.988540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.993062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.993114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.993132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.079 [2024-11-27 05:49:43.997933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.079 [2024-11-27 05:49:43.997988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.079 [2024-11-27 05:49:43.998006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.002402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 
00:27:56.080 [2024-11-27 05:49:44.002499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.002519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.006862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.006916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.006934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.011428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.011496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.011513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.016413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.016547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.016566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.021758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.021812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.021830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.026587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.026661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.026689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.031256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.031320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.031338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.036604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.036705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.036725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.042472] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.042568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.042587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.047392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.047500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.047520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.052073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.052145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.052166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.056731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.056784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.056802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:56.080 [2024-11-27 05:49:44.061103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.061158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.061176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.065737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.065791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.065809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.070431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.070487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.070505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.080 [2024-11-27 05:49:44.075109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.080 [2024-11-27 05:49:44.075161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.080 [2024-11-27 05:49:44.075179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.079799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.079852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.340 [2024-11-27 05:49:44.079870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.084481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.084534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.340 [2024-11-27 05:49:44.084552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.089169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.089220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.340 [2024-11-27 05:49:44.089237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.093581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.093649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.340 [2024-11-27 05:49:44.093666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.098185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.098242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.340 [2024-11-27 05:49:44.098259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.102828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.102884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.340 [2024-11-27 05:49:44.102901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.107877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.107929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.340 [2024-11-27 05:49:44.107950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.113099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.113157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:56.340 [2024-11-27 05:49:44.113174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.117751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.117826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.340 [2024-11-27 05:49:44.117845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.122248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.122375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.340 [2024-11-27 05:49:44.122394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.340 [2024-11-27 05:49:44.127440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.340 [2024-11-27 05:49:44.127527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.340 [2024-11-27 05:49:44.127545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.132701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.132772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.132812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.137342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.137407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.137425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.141835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.141894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.141912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.146189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.146250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.146267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.150491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.150623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.150645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.155223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.155323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.155342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.159829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.159883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.159900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.164424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.164484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.164501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.168719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 
00:27:56.341 [2024-11-27 05:49:44.168777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.168794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.173187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.173245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.173262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.177902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.177980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.177998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.182719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.182774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.182792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.187989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.188047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.188066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.193471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.193532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.193551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.198353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.198425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.198443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.203352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.203402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.203419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.208069] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.208176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.208195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.212660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.212722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.212740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.217173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.217231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.217249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.221598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.221709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.221728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:56.341 [2024-11-27 05:49:44.226179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.226232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.226249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.230770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.230823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.230845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.235392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.235445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.235462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.239928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.239987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.240004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.244528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.244583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.244601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.249111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.249163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.249181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.253831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.341 [2024-11-27 05:49:44.253886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.341 [2024-11-27 05:49:44.253904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.341 [2024-11-27 05:49:44.258499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.258550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.258568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.263206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.263262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.263280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.267798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.267865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.267883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.272397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.272451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.272472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.276987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.277047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:56.342 [2024-11-27 05:49:44.277065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.281760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.281816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.281834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.286223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.286281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.286298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.290799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.290859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.290876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.295152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.295209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.295226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.299611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.299745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.299762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.304311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.304389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.304406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.309364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.309486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.309505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.314954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.315025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.315043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.320219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.320272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.320290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.325021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.325107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.325125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.329815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.329940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.329958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.334763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 
00:27:56.342 [2024-11-27 05:49:44.334845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.334863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.342 [2024-11-27 05:49:44.340304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.342 [2024-11-27 05:49:44.340360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.342 [2024-11-27 05:49:44.340378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.345559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.345615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.345633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.350730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.350796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.350813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.355471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.355523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.355540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.360157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.360268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.360287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.364588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.364659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.364684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.370127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.370210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.370229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.374799] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.374865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.374884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.379416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.379476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.379494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.383939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.383988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.384006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.388481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.388539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.388557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:56.603 [2024-11-27 05:49:44.393097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.393152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.393170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.397715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.397775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.397798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.402156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.402260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.402279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.406569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.406625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.406642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.411102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.411153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.411170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.415624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.415687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.415705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.420658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.420745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.420763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.426087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.426164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.426181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.431341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.431401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.431420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.437281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.437335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.437354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.442407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.442457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.442475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.447518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.447573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:56.603 [2024-11-27 05:49:44.447590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.452828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.452902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.452920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.458066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.458169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.458189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.603 [2024-11-27 05:49:44.462821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.603 [2024-11-27 05:49:44.462932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.603 [2024-11-27 05:49:44.462950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.467462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.467539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.467557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.472579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.472635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.472652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.479122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.479261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.479281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.486188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.486259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.486278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.493166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.493243] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.493263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.498907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.498976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.498995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.504930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.504982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.505000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.511488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.511636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.511655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.518994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.519072] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.519091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.525034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.525105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.525122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.530593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.530695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.530714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.535417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.535477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.535495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.539726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with 
pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.539782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.539804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.544043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.544099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.544116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.548305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.548407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.548425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.552559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.552626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.552645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.556880] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.557051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.557071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.561999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.562174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.562193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.568081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.568221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.568240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.574718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.574895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.574914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 
05:49:44.580839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.581016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.581036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.586813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.587129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.587148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.592663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.592966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.592985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.604 [2024-11-27 05:49:44.598734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.604 [2024-11-27 05:49:44.599026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.604 [2024-11-27 05:49:44.599047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:56.865 [2024-11-27 05:49:44.604783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.865 [2024-11-27 05:49:44.605090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.865 [2024-11-27 05:49:44.605109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.865 [2024-11-27 05:49:44.610853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.865 [2024-11-27 05:49:44.611150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.865 [2024-11-27 05:49:44.611170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.865 [2024-11-27 05:49:44.616793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.865 [2024-11-27 05:49:44.617084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.865 [2024-11-27 05:49:44.617103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.865 [2024-11-27 05:49:44.622618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.865 [2024-11-27 05:49:44.622886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.865 [2024-11-27 05:49:44.622905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.865 [2024-11-27 05:49:44.627117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.865 [2024-11-27 05:49:44.627335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.865 [2024-11-27 05:49:44.627354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.865 [2024-11-27 05:49:44.631360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.865 [2024-11-27 05:49:44.631581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.865 [2024-11-27 05:49:44.631600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.865 [2024-11-27 05:49:44.635776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.865 [2024-11-27 05:49:44.636006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.866 [2024-11-27 05:49:44.636025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.866 [2024-11-27 05:49:44.640097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.866 [2024-11-27 05:49:44.640323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.866 [2024-11-27 05:49:44.640342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.866 [2024-11-27 05:49:44.644301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.866 [2024-11-27 05:49:44.644523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.866 [2024-11-27 05:49:44.644542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.866 [2024-11-27 05:49:44.648798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.866 [2024-11-27 05:49:44.649021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.866 [2024-11-27 05:49:44.649040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.866 [2024-11-27 05:49:44.653183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.866 [2024-11-27 05:49:44.653407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.866 [2024-11-27 05:49:44.653425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.866 6053.00 IOPS, 756.62 MiB/s [2024-11-27T04:49:44.870Z] [2024-11-27 05:49:44.658943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.866 [2024-11-27 05:49:44.659160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.866 [2024-11-27 05:49:44.659180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.866 [2024-11-27 05:49:44.663487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.866 [2024-11-27 05:49:44.663737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.866 [2024-11-27 05:49:44.663755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:56.866 [2024-11-27 05:49:44.667952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.866 [2024-11-27 05:49:44.668185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.866 [2024-11-27 05:49:44.668204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:56.866 [2024-11-27 05:49:44.672408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.866 [2024-11-27 05:49:44.672628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.866 [2024-11-27 05:49:44.672647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:56.866 [2024-11-27 05:49:44.676859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:56.866 [2024-11-27 05:49:44.677078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.866 [2024-11-27 05:49:44.677097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:56.866 [2024-11-27 05:49:44.681381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8
[repetitive log output collapsed: the same three-message cycle — tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8", nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* for a WRITE (sqid:1 cid:0 nsid:1 len:32, lba varying per entry), and nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 with sqhd cycling 0002/0022/0042/0062 — repeats continuously from 05:49:44.681 through 05:49:45.066]
00:27:57.129 [2024-11-27 05:49:45.070395] tcp.c:2233:data_crc32_calc_done:
*ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.129 [2024-11-27 05:49:45.070628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.129 [2024-11-27 05:49:45.070647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.129 [2024-11-27 05:49:45.074873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.129 [2024-11-27 05:49:45.075089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.129 [2024-11-27 05:49:45.075108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.129 [2024-11-27 05:49:45.079291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.129 [2024-11-27 05:49:45.079504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.129 [2024-11-27 05:49:45.079523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.129 [2024-11-27 05:49:45.083697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.129 [2024-11-27 05:49:45.083929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.129 [2024-11-27 05:49:45.083947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.129 [2024-11-27 
05:49:45.087984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.130 [2024-11-27 05:49:45.088215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.130 [2024-11-27 05:49:45.088234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.130 [2024-11-27 05:49:45.092390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.130 [2024-11-27 05:49:45.092612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.130 [2024-11-27 05:49:45.092632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.130 [2024-11-27 05:49:45.096933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.130 [2024-11-27 05:49:45.097162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.130 [2024-11-27 05:49:45.097181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.130 [2024-11-27 05:49:45.101995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.130 [2024-11-27 05:49:45.102213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.130 [2024-11-27 05:49:45.102232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:57.130 [2024-11-27 05:49:45.106872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.130 [2024-11-27 05:49:45.107102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.130 [2024-11-27 05:49:45.107121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.130 [2024-11-27 05:49:45.111429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.130 [2024-11-27 05:49:45.111651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.130 [2024-11-27 05:49:45.111676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.130 [2024-11-27 05:49:45.116037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.130 [2024-11-27 05:49:45.116254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.130 [2024-11-27 05:49:45.116273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.130 [2024-11-27 05:49:45.120628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.130 [2024-11-27 05:49:45.120861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.130 [2024-11-27 05:49:45.120880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.130 [2024-11-27 05:49:45.125131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.130 [2024-11-27 05:49:45.125360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.130 [2024-11-27 05:49:45.125382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.391 [2024-11-27 05:49:45.129471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.391 [2024-11-27 05:49:45.129715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.391 [2024-11-27 05:49:45.129735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.391 [2024-11-27 05:49:45.134262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.391 [2024-11-27 05:49:45.134476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.391 [2024-11-27 05:49:45.134495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.391 [2024-11-27 05:49:45.138784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.391 [2024-11-27 05:49:45.139012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.391 [2024-11-27 05:49:45.139032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.391 [2024-11-27 05:49:45.143758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.391 [2024-11-27 05:49:45.143971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.391 [2024-11-27 05:49:45.143990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.391 [2024-11-27 05:49:45.149111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.391 [2024-11-27 05:49:45.149335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.391 [2024-11-27 05:49:45.149354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.391 [2024-11-27 05:49:45.153928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.391 [2024-11-27 05:49:45.154147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.391 [2024-11-27 05:49:45.154166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.391 [2024-11-27 05:49:45.158480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.391 [2024-11-27 05:49:45.158697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:57.391 [2024-11-27 05:49:45.158716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.391 [2024-11-27 05:49:45.162943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.391 [2024-11-27 05:49:45.163167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.391 [2024-11-27 05:49:45.163185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.391 [2024-11-27 05:49:45.167220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.391 [2024-11-27 05:49:45.167450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.391 [2024-11-27 05:49:45.167469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.391 [2024-11-27 05:49:45.171578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.391 [2024-11-27 05:49:45.171807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.171825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.176101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.176319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.176339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.181221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.181460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.181480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.186317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.186542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.186561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.191486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.191710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.191729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.196998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.197212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.197231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.202102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.202319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.202338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.206945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.207157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.207176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.211572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.211805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.211824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.216004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 
00:27:57.392 [2024-11-27 05:49:45.216225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.216245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.220319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.220541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.220560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.224779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.225017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.225036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.229265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.229489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.229508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.233719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.233945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.233964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.238141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.238368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.238387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.242511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.242739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.242758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.246986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.247210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.247232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.251776] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.252007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.252027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.256730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.256934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.256953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.262311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.262528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.262547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.267828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.268046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.268065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:57.392 [2024-11-27 05:49:45.274596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.274895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.274915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.281544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.281870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.281889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.288289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.288604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.288624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.295570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.295791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.295811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.302089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.302375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.302398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.308677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.308909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-11-27 05:49:45.308928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.392 [2024-11-27 05:49:45.315193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.392 [2024-11-27 05:49:45.315482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.315501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.322021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.322309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.322328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.328971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.329169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.329188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.335022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.335311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.335332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.341062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.341383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.341402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.347345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.347677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:57.393 [2024-11-27 05:49:45.347696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.353386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.353725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.353745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.360024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.360309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.360328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.367013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.367311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.367329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.373815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.374027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.374046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.379754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.379971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.379990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.393 [2024-11-27 05:49:45.385795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.393 [2024-11-27 05:49:45.386059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-11-27 05:49:45.386078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.652 [2024-11-27 05:49:45.392314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.652 [2024-11-27 05:49:45.392548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.392567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.398621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.398855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.398875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.403724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.403949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.403968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.409282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.409539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.409562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.415771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.416000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.416019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.420934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 
00:27:57.653 [2024-11-27 05:49:45.421183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.421202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.425444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.425690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.425709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.430209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.430490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.430509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.436092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.436404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.436424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.441246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.441466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.441486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.446029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.446270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.446290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.450776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.451013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.451032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.455619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.455864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.455887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.460340] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.460581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.460600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.465259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.465482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.465501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.470075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.470308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.470327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.474904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.475124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.475143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:57.653 [2024-11-27 05:49:45.479747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.479977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.479997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.484337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.484569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.484589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.489113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.489331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.489351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.493933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.494159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.494178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.499247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.499456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.499475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.504387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.504613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.504632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.510605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.510900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.510919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.516735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.516955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.516975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.522551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.522789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.522808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.528324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.528538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-11-27 05:49:45.528558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.653 [2024-11-27 05:49:45.535067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.653 [2024-11-27 05:49:45.535395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.535414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.542032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.542115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:57.654 [2024-11-27 05:49:45.542134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.548212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.548431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.548451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.553059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.553284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.553303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.558267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.558471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.558490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.563419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.563637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.563656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.568457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.568683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.568702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.573422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.573637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.573656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.578270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.578490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.578509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.583222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.583429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.583448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.587954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.588185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.588204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.593101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.593318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.593341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.598244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.598446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.598465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.603257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 
00:27:57.654 [2024-11-27 05:49:45.603489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.603508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.608270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.608491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.608509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.613662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.613888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.613907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.618897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.618978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.618997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.623866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.624089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.624108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.628684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.628905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.628924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.634684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.634906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.634925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.639710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.639906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.639925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.644221] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.644439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.644458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.648691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.648903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.648921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:57.654 [2024-11-27 05:49:45.652999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.654 [2024-11-27 05:49:45.653220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.654 [2024-11-27 05:49:45.653240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:57.914 6105.00 IOPS, 763.12 MiB/s [2024-11-27T04:49:45.918Z] [2024-11-27 05:49:45.658628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2260660) with pdu=0x200016eff3c8 00:27:57.914 [2024-11-27 05:49:45.658688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.914 [2024-11-27 05:49:45.658706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:57.914 00:27:57.914 Latency(us) 00:27:57.914 [2024-11-27T04:49:45.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.914 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:57.914 nvme0n1 : 2.00 6102.60 762.83 0.00 0.00 2617.50 1794.44 13419.28 00:27:57.914 [2024-11-27T04:49:45.918Z] =================================================================================================================== 00:27:57.914 [2024-11-27T04:49:45.918Z] Total : 6102.60 762.83 0.00 0.00 2617.50 1794.44 13419.28 00:27:57.914 { 00:27:57.914 "results": [ 00:27:57.914 { 00:27:57.914 "job": "nvme0n1", 00:27:57.914 "core_mask": "0x2", 00:27:57.914 "workload": "randwrite", 00:27:57.914 "status": "finished", 00:27:57.914 "queue_depth": 16, 00:27:57.914 "io_size": 131072, 00:27:57.914 "runtime": 2.003408, 00:27:57.914 "iops": 6102.601167610392, 00:27:57.914 "mibps": 762.825145951299, 00:27:57.914 "io_failed": 0, 00:27:57.914 "io_timeout": 0, 00:27:57.914 "avg_latency_us": 2617.5039498960064, 00:27:57.914 "min_latency_us": 1794.4380952380952, 00:27:57.914 "max_latency_us": 13419.27619047619 00:27:57.914 } 00:27:57.914 ], 00:27:57.914 "core_count": 1 00:27:57.914 } 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:57.914 | .driver_specific 00:27:57.914 | .nvme_error 00:27:57.914 | .status_code 00:27:57.914 | .command_transient_transport_error' 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_get_iostat -b nvme0n1 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 395 > 0 )) 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1912584 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1912584 ']' 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1912584 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.914 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1912584 00:27:58.173 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:58.173 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:58.173 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1912584' 00:27:58.173 killing process with pid 1912584 00:27:58.173 05:49:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1912584 00:27:58.173 Received shutdown signal, test time was about 2.000000 seconds 00:27:58.173 00:27:58.173 Latency(us) 00:27:58.173 [2024-11-27T04:49:46.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.173 [2024-11-27T04:49:46.177Z] =================================================================================================================== 00:27:58.173 [2024-11-27T04:49:46.177Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:58.173 05:49:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1912584 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1910859 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1910859 ']' 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1910859 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1910859 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1910859' 00:27:58.173 killing process with pid 1910859 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1910859 00:27:58.173 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1910859 00:27:58.431 00:27:58.431 real 0m13.850s 00:27:58.431 user 0m26.543s 00:27:58.431 sys 0m4.481s 00:27:58.431 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:58.431 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:58.431 ************************************ 00:27:58.431 END TEST nvmf_digest_error 
00:27:58.431 ************************************ 00:27:58.431 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:58.431 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:58.431 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:58.431 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:58.431 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:58.432 rmmod nvme_tcp 00:27:58.432 rmmod nvme_fabrics 00:27:58.432 rmmod nvme_keyring 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1910859 ']' 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1910859 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1910859 ']' 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1910859 00:27:58.432 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1910859) - No such process 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1910859 is not found' 00:27:58.432 Process with pid 1910859 is not found 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.432 05:49:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.966 05:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:00.966 00:28:00.966 real 0m36.288s 00:28:00.966 user 0m55.307s 00:28:00.966 sys 0m13.634s 00:28:00.966 05:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:00.966 05:49:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:00.966 ************************************ 00:28:00.966 END TEST nvmf_digest 00:28:00.966 ************************************ 00:28:00.966 05:49:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:00.966 05:49:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:00.966 05:49:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 
00:28:00.966 05:49:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:00.966 05:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:00.966 05:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.966 05:49:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.966 ************************************ 00:28:00.966 START TEST nvmf_bdevperf 00:28:00.966 ************************************ 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:00.967 * Looking for test storage... 00:28:00.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.967 05:49:48 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.967 05:49:48 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:00.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.967 --rc genhtml_branch_coverage=1 00:28:00.967 --rc genhtml_function_coverage=1 00:28:00.967 --rc genhtml_legend=1 00:28:00.967 --rc geninfo_all_blocks=1 00:28:00.967 --rc geninfo_unexecuted_blocks=1 00:28:00.967 00:28:00.967 ' 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:00.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.967 --rc genhtml_branch_coverage=1 00:28:00.967 --rc genhtml_function_coverage=1 00:28:00.967 --rc genhtml_legend=1 00:28:00.967 --rc geninfo_all_blocks=1 00:28:00.967 --rc geninfo_unexecuted_blocks=1 00:28:00.967 00:28:00.967 ' 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:00.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.967 --rc genhtml_branch_coverage=1 00:28:00.967 --rc genhtml_function_coverage=1 00:28:00.967 --rc genhtml_legend=1 00:28:00.967 --rc geninfo_all_blocks=1 00:28:00.967 --rc geninfo_unexecuted_blocks=1 00:28:00.967 00:28:00.967 ' 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:00.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.967 --rc genhtml_branch_coverage=1 00:28:00.967 --rc genhtml_function_coverage=1 00:28:00.967 --rc genhtml_legend=1 00:28:00.967 --rc geninfo_all_blocks=1 00:28:00.967 --rc geninfo_unexecuted_blocks=1 
00:28:00.967 00:28:00.967 ' 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.967 05:49:48 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.967 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:00.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@24 -- # nvmftestinit 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.968 05:49:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:07.662 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:07.662 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.662 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:07.663 Found net devices under 0000:86:00.0: cvl_0_0 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:07.663 Found net devices under 0000:86:00.1: cvl_0_1 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.663 05:49:54 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:07.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:28:07.663 00:28:07.663 --- 10.0.0.2 ping statistics --- 00:28:07.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.663 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:07.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:28:07.663 00:28:07.663 --- 10.0.0.1 ping statistics --- 00:28:07.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.663 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:07.663 05:49:54 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1916609 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1916609 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1916609 ']' 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.663 [2024-11-27 05:49:54.740675] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:28:07.663 [2024-11-27 05:49:54.740721] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.663 [2024-11-27 05:49:54.811973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:07.663 [2024-11-27 05:49:54.865257] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.663 [2024-11-27 05:49:54.865304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.663 [2024-11-27 05:49:54.865316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.663 [2024-11-27 05:49:54.865340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.663 [2024-11-27 05:49:54.865349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:07.663 [2024-11-27 05:49:54.867175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:07.663 [2024-11-27 05:49:54.867280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:07.663 [2024-11-27 05:49:54.867282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:07.663 05:49:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.663 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.663 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.663 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.663 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.664 [2024-11-27 05:49:55.013153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.664 Malloc0 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:07.664 [2024-11-27 05:49:55.071779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:07.664 
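The RPC calls traced above build the target in four steps: create the TCP transport, create a 64 MiB malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A minimal replay of that sequence is sketched below; `RPC` defaults to `echo` so the sketch runs anywhere, and pointing it at SPDK's `scripts/rpc.py` against a live nvmf_tgt is an assumption about how it would be used for real:

```shell
#!/bin/sh
# RPC is a stand-in for SPDK's scripts/rpc.py; echo by default so the
# sequence is printed rather than executed.
RPC="${RPC:-echo rpc.py}"

setup_target() {
    nqn="nqn.2016-06.io.spdk:cnode1"
    # Transport with the same options as the log: TCP, -o, 8192-byte
    # in-capsule data size.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB backing bdev with 512-byte blocks, named Malloc0.
    $RPC bdev_malloc_create 64 512 -b Malloc0
    # Subsystem allowing any host (-a) with the serial from the log.
    $RPC nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns "$nqn" Malloc0
    $RPC nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
}

setup_target
```

Running it prints the five RPC invocations in order, matching the host/bdevperf.sh@17 through @21 trace lines.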
05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:07.664 { 00:28:07.664 "params": { 00:28:07.664 "name": "Nvme$subsystem", 00:28:07.664 "trtype": "$TEST_TRANSPORT", 00:28:07.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.664 "adrfam": "ipv4", 00:28:07.664 "trsvcid": "$NVMF_PORT", 00:28:07.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.664 "hdgst": ${hdgst:-false}, 00:28:07.664 "ddgst": ${ddgst:-false} 00:28:07.664 }, 00:28:07.664 "method": "bdev_nvme_attach_controller" 00:28:07.664 } 00:28:07.664 EOF 00:28:07.664 )") 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:07.664 05:49:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:07.664 "params": { 00:28:07.664 "name": "Nvme1", 00:28:07.664 "trtype": "tcp", 00:28:07.664 "traddr": "10.0.0.2", 00:28:07.664 "adrfam": "ipv4", 00:28:07.664 "trsvcid": "4420", 00:28:07.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:07.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:07.664 "hdgst": false, 00:28:07.664 "ddgst": false 00:28:07.664 }, 00:28:07.664 "method": "bdev_nvme_attach_controller" 00:28:07.664 }' 00:28:07.664 [2024-11-27 05:49:55.124324] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
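The gen_nvmf_target_json trace above shows the pattern used to feed bdevperf its `--json` config: a per-subsystem heredoc template is expanded into a fragment and printed as a `bdev_nvme_attach_controller` entry. A simplified sketch of that pattern is below; it omits the `jq`/IFS merge step the real function performs, hardcodes `hdgst`/`ddgst` to their logged defaults, and the environment-variable defaults are taken from the values the log resolved:

```shell
#!/bin/bash
# Defaults mirror the resolved values in the log; override via env.
TEST_TRANSPORT="${TEST_TRANSPORT:-tcp}"
NVMF_FIRST_TARGET_IP="${NVMF_FIRST_TARGET_IP:-10.0.0.2}"
NVMF_PORT="${NVMF_PORT:-4420}"

gen_target_json() {
    local subsystem
    # "${@:-1}" falls back to subsystem 1 when no arguments are given,
    # as in the log's "for subsystem in ${@:-1}" loop.
    for subsystem in "${@:-1}"; do
        cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    done
}

gen_target_json 1
```

The output is the same attach-controller JSON the log's `printf '%s\n'` step emits, which bdevperf then reads from `/dev/fd/62`.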
00:28:07.664 [2024-11-27 05:49:55.124367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916632 ] 00:28:07.664 [2024-11-27 05:49:55.198585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.664 [2024-11-27 05:49:55.239494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.664 Running I/O for 1 seconds... 00:28:08.602 11081.00 IOPS, 43.29 MiB/s 00:28:08.602 Latency(us) 00:28:08.602 [2024-11-27T04:49:56.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.602 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:08.602 Verification LBA range: start 0x0 length 0x4000 00:28:08.602 Nvme1n1 : 1.04 10735.17 41.93 0.00 0.00 11447.20 2090.91 42442.36 00:28:08.602 [2024-11-27T04:49:56.606Z] =================================================================================================================== 00:28:08.602 [2024-11-27T04:49:56.606Z] Total : 10735.17 41.93 0.00 0.00 11447.20 2090.91 42442.36 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1916868 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:08.862 { 00:28:08.862 "params": { 00:28:08.862 "name": "Nvme$subsystem", 00:28:08.862 "trtype": "$TEST_TRANSPORT", 00:28:08.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.862 "adrfam": "ipv4", 00:28:08.862 "trsvcid": "$NVMF_PORT", 00:28:08.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.862 "hdgst": ${hdgst:-false}, 00:28:08.862 "ddgst": ${ddgst:-false} 00:28:08.862 }, 00:28:08.862 "method": "bdev_nvme_attach_controller" 00:28:08.862 } 00:28:08.862 EOF 00:28:08.862 )") 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:08.862 05:49:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:08.862 "params": { 00:28:08.862 "name": "Nvme1", 00:28:08.862 "trtype": "tcp", 00:28:08.862 "traddr": "10.0.0.2", 00:28:08.862 "adrfam": "ipv4", 00:28:08.862 "trsvcid": "4420", 00:28:08.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:08.862 "hdgst": false, 00:28:08.862 "ddgst": false 00:28:08.862 }, 00:28:08.862 "method": "bdev_nvme_attach_controller" 00:28:08.862 }' 00:28:08.862 [2024-11-27 05:49:56.653369] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:28:08.862 [2024-11-27 05:49:56.653416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916868 ] 00:28:08.862 [2024-11-27 05:49:56.727650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.862 [2024-11-27 05:49:56.765468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.121 Running I/O for 15 seconds... 00:28:11.433 11263.00 IOPS, 44.00 MiB/s [2024-11-27T04:49:59.699Z] 11302.00 IOPS, 44.15 MiB/s [2024-11-27T04:49:59.699Z] 05:49:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1916609 00:28:11.695 05:49:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:11.695 [2024-11-27 05:49:59.622913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.695 [2024-11-27 05:49:59.622955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.695 [2024-11-27 05:49:59.622973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.695 [2024-11-27 05:49:59.622981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.695 [2024-11-27 05:49:59.622992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.695 [2024-11-27 05:49:59.623000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.695 [2024-11-27 05:49:59.623011] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.695 [2024-11-27 05:49:59.623018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair command/completion pairs elided: after the target was killed with kill -9, every outstanding qid:1 READ/WRITE command (LBAs 100592 through 101088) completed with ABORTED - SQ DELETION (00/08), logged as identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs ...]
00:28:11.697 [2024-11-27 05:49:59.624219] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.697 [2024-11-27 05:49:59.624226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.697 [2024-11-27 05:49:59.624235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.697 [2024-11-27 05:49:59.624241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.697 [2024-11-27 05:49:59.624249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.697 [2024-11-27 05:49:59.624256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.697 [2024-11-27 05:49:59.624265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.697 [2024-11-27 05:49:59.624271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.697 [2024-11-27 05:49:59.624279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.697 [2024-11-27 05:49:59.624286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.697 [2024-11-27 05:49:59.624293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.697 [2024-11-27 05:49:59.624300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:11.697 [2024-11-27 05:49:59.624309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.697 [2024-11-27 05:49:59.624316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.697 [2024-11-27 05:49:59.624325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.697 [2024-11-27 05:49:59.624331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 
05:49:59.624390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624474] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 
05:49:59.624645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624733] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.698 [2024-11-27 05:49:59.624785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.698 [2024-11-27 05:49:59.624793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.699 [2024-11-27 05:49:59.624830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 
05:49:59.624903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.624985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.624993] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.625000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.625007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.625014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.625023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.625029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.625037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.625043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.625051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.699 [2024-11-27 05:49:59.625058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.625066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cd6c0 is same with the state(6) to be set 00:28:11.699 [2024-11-27 05:49:59.625075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:11.699 [2024-11-27 
05:49:59.625080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:11.699 [2024-11-27 05:49:59.625086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101536 len:8 PRP1 0x0 PRP2 0x0 00:28:11.699 [2024-11-27 05:49:59.625095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.699 [2024-11-27 05:49:59.627948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.699 [2024-11-27 05:49:59.628002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.699 [2024-11-27 05:49:59.628470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.699 [2024-11-27 05:49:59.628485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.699 [2024-11-27 05:49:59.628493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.699 [2024-11-27 05:49:59.628668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.699 [2024-11-27 05:49:59.628848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.699 [2024-11-27 05:49:59.628861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.699 [2024-11-27 05:49:59.628870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.699 [2024-11-27 05:49:59.628879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.699 [2024-11-27 05:49:59.641276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.699 [2024-11-27 05:49:59.641564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.699 [2024-11-27 05:49:59.641581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.699 [2024-11-27 05:49:59.641589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.699 [2024-11-27 05:49:59.641766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.699 [2024-11-27 05:49:59.641934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.699 [2024-11-27 05:49:59.641942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.699 [2024-11-27 05:49:59.641948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.699 [2024-11-27 05:49:59.641954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.699 [2024-11-27 05:49:59.654161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.699 [2024-11-27 05:49:59.654504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.699 [2024-11-27 05:49:59.654521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.699 [2024-11-27 05:49:59.654528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.699 [2024-11-27 05:49:59.654705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.699 [2024-11-27 05:49:59.654875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.699 [2024-11-27 05:49:59.654882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.700 [2024-11-27 05:49:59.654889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.700 [2024-11-27 05:49:59.654895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.700 [2024-11-27 05:49:59.667136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.700 [2024-11-27 05:49:59.667529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.700 [2024-11-27 05:49:59.667545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.700 [2024-11-27 05:49:59.667552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.700 [2024-11-27 05:49:59.667725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.700 [2024-11-27 05:49:59.667893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.700 [2024-11-27 05:49:59.667901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.700 [2024-11-27 05:49:59.667907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.700 [2024-11-27 05:49:59.667913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.700 [2024-11-27 05:49:59.680040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.700 [2024-11-27 05:49:59.680316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.700 [2024-11-27 05:49:59.680332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.700 [2024-11-27 05:49:59.680342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.700 [2024-11-27 05:49:59.680511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.700 [2024-11-27 05:49:59.680686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.700 [2024-11-27 05:49:59.680694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.700 [2024-11-27 05:49:59.680700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.700 [2024-11-27 05:49:59.680706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.700 [2024-11-27 05:49:59.693033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.700 [2024-11-27 05:49:59.693368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.700 [2024-11-27 05:49:59.693385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.700 [2024-11-27 05:49:59.693392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.700 [2024-11-27 05:49:59.693565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.700 [2024-11-27 05:49:59.693745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.700 [2024-11-27 05:49:59.693753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.700 [2024-11-27 05:49:59.693760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.700 [2024-11-27 05:49:59.693766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.961 [2024-11-27 05:49:59.706221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.961 [2024-11-27 05:49:59.706514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.961 [2024-11-27 05:49:59.706531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.961 [2024-11-27 05:49:59.706539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.961 [2024-11-27 05:49:59.706730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.961 [2024-11-27 05:49:59.706937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.961 [2024-11-27 05:49:59.706946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.961 [2024-11-27 05:49:59.706953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.961 [2024-11-27 05:49:59.706959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.961 [2024-11-27 05:49:59.719430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.961 [2024-11-27 05:49:59.719871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.961 [2024-11-27 05:49:59.719889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.961 [2024-11-27 05:49:59.719896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.961 [2024-11-27 05:49:59.720080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.961 [2024-11-27 05:49:59.720294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.961 [2024-11-27 05:49:59.720303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.961 [2024-11-27 05:49:59.720310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.961 [2024-11-27 05:49:59.720317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.961 [2024-11-27 05:49:59.732679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.961 [2024-11-27 05:49:59.733106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.961 [2024-11-27 05:49:59.733124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.961 [2024-11-27 05:49:59.733132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.961 [2024-11-27 05:49:59.733328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.962 [2024-11-27 05:49:59.733523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.962 [2024-11-27 05:49:59.733532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.962 [2024-11-27 05:49:59.733540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.962 [2024-11-27 05:49:59.733546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.962 [2024-11-27 05:49:59.745840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.962 [2024-11-27 05:49:59.746229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.962 [2024-11-27 05:49:59.746246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.962 [2024-11-27 05:49:59.746253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.962 [2024-11-27 05:49:59.746436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.962 [2024-11-27 05:49:59.746620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.962 [2024-11-27 05:49:59.746629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.962 [2024-11-27 05:49:59.746636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.962 [2024-11-27 05:49:59.746642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.962 [2024-11-27 05:49:59.759033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.962 [2024-11-27 05:49:59.759484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.962 [2024-11-27 05:49:59.759501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.962 [2024-11-27 05:49:59.759509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.962 [2024-11-27 05:49:59.759699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.962 [2024-11-27 05:49:59.759884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.962 [2024-11-27 05:49:59.759892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.962 [2024-11-27 05:49:59.759903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.962 [2024-11-27 05:49:59.759909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.962 [2024-11-27 05:49:59.772085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.962 [2024-11-27 05:49:59.772436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.962 [2024-11-27 05:49:59.772453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.962 [2024-11-27 05:49:59.772460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.962 [2024-11-27 05:49:59.772633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.962 [2024-11-27 05:49:59.772810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.962 [2024-11-27 05:49:59.772819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.962 [2024-11-27 05:49:59.772825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.962 [2024-11-27 05:49:59.772832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.962 [2024-11-27 05:49:59.785209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.962 [2024-11-27 05:49:59.785634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.962 [2024-11-27 05:49:59.785650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.962 [2024-11-27 05:49:59.785657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.962 [2024-11-27 05:49:59.785835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.962 [2024-11-27 05:49:59.786007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.962 [2024-11-27 05:49:59.786015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.962 [2024-11-27 05:49:59.786022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.962 [2024-11-27 05:49:59.786028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.962 [2024-11-27 05:49:59.798291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.962 [2024-11-27 05:49:59.798685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.962 [2024-11-27 05:49:59.798702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.962 [2024-11-27 05:49:59.798710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.962 [2024-11-27 05:49:59.798883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.962 [2024-11-27 05:49:59.799054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.962 [2024-11-27 05:49:59.799062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.962 [2024-11-27 05:49:59.799068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.962 [2024-11-27 05:49:59.799075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.962 [2024-11-27 05:49:59.811264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.962 [2024-11-27 05:49:59.811688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.962 [2024-11-27 05:49:59.811733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.962 [2024-11-27 05:49:59.811757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.962 [2024-11-27 05:49:59.812277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.962 [2024-11-27 05:49:59.812446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.962 [2024-11-27 05:49:59.812454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.962 [2024-11-27 05:49:59.812460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.962 [2024-11-27 05:49:59.812466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.962 [2024-11-27 05:49:59.824197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.962 [2024-11-27 05:49:59.824593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.962 [2024-11-27 05:49:59.824609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.962 [2024-11-27 05:49:59.824616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.962 [2024-11-27 05:49:59.824790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.962 [2024-11-27 05:49:59.824958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.962 [2024-11-27 05:49:59.824966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.962 [2024-11-27 05:49:59.824972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.962 [2024-11-27 05:49:59.824978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.962 [2024-11-27 05:49:59.836988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.962 [2024-11-27 05:49:59.837353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.962 [2024-11-27 05:49:59.837397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.962 [2024-11-27 05:49:59.837419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.962 [2024-11-27 05:49:59.838016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.962 [2024-11-27 05:49:59.838241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.962 [2024-11-27 05:49:59.838249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.962 [2024-11-27 05:49:59.838255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.962 [2024-11-27 05:49:59.838261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.962 [2024-11-27 05:49:59.849829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.962 [2024-11-27 05:49:59.850126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.962 [2024-11-27 05:49:59.850142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.962 [2024-11-27 05:49:59.850152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.962 [2024-11-27 05:49:59.850320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.962 [2024-11-27 05:49:59.850489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.962 [2024-11-27 05:49:59.850496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.962 [2024-11-27 05:49:59.850502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.962 [2024-11-27 05:49:59.850509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.962 [2024-11-27 05:49:59.862685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.962 [2024-11-27 05:49:59.863026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.962 [2024-11-27 05:49:59.863042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.963 [2024-11-27 05:49:59.863049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.963 [2024-11-27 05:49:59.863217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.963 [2024-11-27 05:49:59.863384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.963 [2024-11-27 05:49:59.863392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.963 [2024-11-27 05:49:59.863398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.963 [2024-11-27 05:49:59.863404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.963 [2024-11-27 05:49:59.875629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.963 [2024-11-27 05:49:59.875988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.963 [2024-11-27 05:49:59.876005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.963 [2024-11-27 05:49:59.876012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.963 [2024-11-27 05:49:59.876180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.963 [2024-11-27 05:49:59.876348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.963 [2024-11-27 05:49:59.876357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.963 [2024-11-27 05:49:59.876364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.963 [2024-11-27 05:49:59.876370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.963 [2024-11-27 05:49:59.888714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.963 [2024-11-27 05:49:59.889000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.963 [2024-11-27 05:49:59.889016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.963 [2024-11-27 05:49:59.889024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.963 [2024-11-27 05:49:59.889198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.963 [2024-11-27 05:49:59.889373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.963 [2024-11-27 05:49:59.889382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.963 [2024-11-27 05:49:59.889389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.963 [2024-11-27 05:49:59.889396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.963 [2024-11-27 05:49:59.901740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.963 [2024-11-27 05:49:59.902106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.963 [2024-11-27 05:49:59.902151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.963 [2024-11-27 05:49:59.902173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.963 [2024-11-27 05:49:59.902649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.963 [2024-11-27 05:49:59.902830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.963 [2024-11-27 05:49:59.902840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.963 [2024-11-27 05:49:59.902847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.963 [2024-11-27 05:49:59.902853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.963 [2024-11-27 05:49:59.914713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.963 [2024-11-27 05:49:59.915045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.963 [2024-11-27 05:49:59.915061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.963 [2024-11-27 05:49:59.915068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.963 [2024-11-27 05:49:59.915235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.963 [2024-11-27 05:49:59.915403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.963 [2024-11-27 05:49:59.915411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.963 [2024-11-27 05:49:59.915417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.963 [2024-11-27 05:49:59.915423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.963 [2024-11-27 05:49:59.927668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.963 [2024-11-27 05:49:59.928054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.963 [2024-11-27 05:49:59.928071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.963 [2024-11-27 05:49:59.928078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.963 [2024-11-27 05:49:59.928246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.963 [2024-11-27 05:49:59.928414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.963 [2024-11-27 05:49:59.928422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.963 [2024-11-27 05:49:59.928431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.963 [2024-11-27 05:49:59.928437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.963 [2024-11-27 05:49:59.940551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.963 [2024-11-27 05:49:59.940889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.963 [2024-11-27 05:49:59.940906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.963 [2024-11-27 05:49:59.940912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.963 [2024-11-27 05:49:59.941080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.963 [2024-11-27 05:49:59.941247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.963 [2024-11-27 05:49:59.941255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.963 [2024-11-27 05:49:59.941261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.963 [2024-11-27 05:49:59.941267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:11.963 [2024-11-27 05:49:59.953560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:11.963 [2024-11-27 05:49:59.953967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.963 [2024-11-27 05:49:59.953985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:11.963 [2024-11-27 05:49:59.953993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:11.963 [2024-11-27 05:49:59.954160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:11.963 [2024-11-27 05:49:59.954328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:11.963 [2024-11-27 05:49:59.954336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:11.963 [2024-11-27 05:49:59.954342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:11.963 [2024-11-27 05:49:59.954348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.224 [2024-11-27 05:49:59.966497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.224 [2024-11-27 05:49:59.966920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.224 [2024-11-27 05:49:59.966937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.224 [2024-11-27 05:49:59.966945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.224 [2024-11-27 05:49:59.967112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.224 [2024-11-27 05:49:59.967280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.224 [2024-11-27 05:49:59.967288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.224 [2024-11-27 05:49:59.967294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.224 [2024-11-27 05:49:59.967301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.224 [2024-11-27 05:49:59.979345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.224 [2024-11-27 05:49:59.979779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.224 [2024-11-27 05:49:59.979795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.224 [2024-11-27 05:49:59.979802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.224 [2024-11-27 05:49:59.979969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.224 [2024-11-27 05:49:59.980137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.224 [2024-11-27 05:49:59.980145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.224 [2024-11-27 05:49:59.980151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.224 [2024-11-27 05:49:59.980157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.224 [2024-11-27 05:49:59.992291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.224 [2024-11-27 05:49:59.992623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.224 [2024-11-27 05:49:59.992639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.224 [2024-11-27 05:49:59.992646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.224 [2024-11-27 05:49:59.992819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.224 [2024-11-27 05:49:59.992987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.224 [2024-11-27 05:49:59.992994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.224 [2024-11-27 05:49:59.993001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.224 [2024-11-27 05:49:59.993007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.224 [2024-11-27 05:50:00.005438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.224 [2024-11-27 05:50:00.005792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.224 [2024-11-27 05:50:00.005809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.224 [2024-11-27 05:50:00.005816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.224 [2024-11-27 05:50:00.005989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.224 [2024-11-27 05:50:00.006162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.224 [2024-11-27 05:50:00.006169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.224 [2024-11-27 05:50:00.006176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.224 [2024-11-27 05:50:00.006182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.224 [2024-11-27 05:50:00.019169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.224 [2024-11-27 05:50:00.019615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.224 [2024-11-27 05:50:00.019633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.224 [2024-11-27 05:50:00.019645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.224 [2024-11-27 05:50:00.019827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.224 [2024-11-27 05:50:00.020002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.224 [2024-11-27 05:50:00.020011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.224 [2024-11-27 05:50:00.020017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.224 [2024-11-27 05:50:00.020024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.224 [2024-11-27 05:50:00.031933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.224 [2024-11-27 05:50:00.032374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.224 [2024-11-27 05:50:00.032392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.224 [2024-11-27 05:50:00.032399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.224 [2024-11-27 05:50:00.032567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.224 [2024-11-27 05:50:00.032742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.224 [2024-11-27 05:50:00.032751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.224 [2024-11-27 05:50:00.032757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.224 [2024-11-27 05:50:00.032764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.224 [2024-11-27 05:50:00.045484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.224 [2024-11-27 05:50:00.046119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.224 [2024-11-27 05:50:00.046215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.224 [2024-11-27 05:50:00.046252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.224 [2024-11-27 05:50:00.046682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.224 [2024-11-27 05:50:00.046913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.224 [2024-11-27 05:50:00.046924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.224 [2024-11-27 05:50:00.046933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.224 [2024-11-27 05:50:00.046940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.224 9652.67 IOPS, 37.71 MiB/s [2024-11-27T04:50:00.228Z] [2024-11-27 05:50:00.058681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.224 [2024-11-27 05:50:00.059091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.224 [2024-11-27 05:50:00.059109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.224 [2024-11-27 05:50:00.059116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.224 [2024-11-27 05:50:00.059289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.224 [2024-11-27 05:50:00.059467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.224 [2024-11-27 05:50:00.059475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.224 [2024-11-27 05:50:00.059481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.224 [2024-11-27 05:50:00.059487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.224 [2024-11-27 05:50:00.071739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.224 [2024-11-27 05:50:00.072144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.224 [2024-11-27 05:50:00.072161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.224 [2024-11-27 05:50:00.072168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.224 [2024-11-27 05:50:00.072341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.224 [2024-11-27 05:50:00.072515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.224 [2024-11-27 05:50:00.072523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.224 [2024-11-27 05:50:00.072529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.224 [2024-11-27 05:50:00.072535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.224 [2024-11-27 05:50:00.084737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.224 [2024-11-27 05:50:00.085159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.224 [2024-11-27 05:50:00.085176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.224 [2024-11-27 05:50:00.085183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.224 [2024-11-27 05:50:00.085366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.224 [2024-11-27 05:50:00.085534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.225 [2024-11-27 05:50:00.085542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.225 [2024-11-27 05:50:00.085549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.225 [2024-11-27 05:50:00.085555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.225 [2024-11-27 05:50:00.098322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.225 [2024-11-27 05:50:00.098750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.225 [2024-11-27 05:50:00.098767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.225 [2024-11-27 05:50:00.098775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.225 [2024-11-27 05:50:00.098967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.225 [2024-11-27 05:50:00.099141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.225 [2024-11-27 05:50:00.099149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.225 [2024-11-27 05:50:00.099159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.225 [2024-11-27 05:50:00.099166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.225 [2024-11-27 05:50:00.111252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.225 [2024-11-27 05:50:00.111683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.225 [2024-11-27 05:50:00.111730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.225 [2024-11-27 05:50:00.111753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.225 [2024-11-27 05:50:00.112210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.225 [2024-11-27 05:50:00.112378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.225 [2024-11-27 05:50:00.112386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.225 [2024-11-27 05:50:00.112392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.225 [2024-11-27 05:50:00.112398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.225 [2024-11-27 05:50:00.124171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.225 [2024-11-27 05:50:00.124576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.225 [2024-11-27 05:50:00.124620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.225 [2024-11-27 05:50:00.124642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.225 [2024-11-27 05:50:00.125131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.225 [2024-11-27 05:50:00.125305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.225 [2024-11-27 05:50:00.125313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.225 [2024-11-27 05:50:00.125320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.225 [2024-11-27 05:50:00.125326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.225 [2024-11-27 05:50:00.137098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.225 [2024-11-27 05:50:00.137464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.225 [2024-11-27 05:50:00.137481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.225 [2024-11-27 05:50:00.137488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.225 [2024-11-27 05:50:00.137661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.225 [2024-11-27 05:50:00.137840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.225 [2024-11-27 05:50:00.137849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.225 [2024-11-27 05:50:00.137855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.225 [2024-11-27 05:50:00.137862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.225 [2024-11-27 05:50:00.150184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.225 [2024-11-27 05:50:00.150609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.225 [2024-11-27 05:50:00.150625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.225 [2024-11-27 05:50:00.150632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.225 [2024-11-27 05:50:00.150810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.225 [2024-11-27 05:50:00.150984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.225 [2024-11-27 05:50:00.150991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.225 [2024-11-27 05:50:00.150998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.225 [2024-11-27 05:50:00.151004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.225 [2024-11-27 05:50:00.163214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.225 [2024-11-27 05:50:00.163620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.225 [2024-11-27 05:50:00.163665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.225 [2024-11-27 05:50:00.163703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.225 [2024-11-27 05:50:00.164194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.225 [2024-11-27 05:50:00.164368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.225 [2024-11-27 05:50:00.164376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.225 [2024-11-27 05:50:00.164382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.225 [2024-11-27 05:50:00.164388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.225 [2024-11-27 05:50:00.176138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.225 [2024-11-27 05:50:00.176530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.225 [2024-11-27 05:50:00.176547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.225 [2024-11-27 05:50:00.176554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.225 [2024-11-27 05:50:00.176745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.225 [2024-11-27 05:50:00.176918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.225 [2024-11-27 05:50:00.176926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.225 [2024-11-27 05:50:00.176932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.225 [2024-11-27 05:50:00.176938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.225 [2024-11-27 05:50:00.189158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.225 [2024-11-27 05:50:00.189486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.225 [2024-11-27 05:50:00.189502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.225 [2024-11-27 05:50:00.189516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.225 [2024-11-27 05:50:00.189691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.225 [2024-11-27 05:50:00.189881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.225 [2024-11-27 05:50:00.189889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.225 [2024-11-27 05:50:00.189895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.225 [2024-11-27 05:50:00.189902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.225 [2024-11-27 05:50:00.202094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.225 [2024-11-27 05:50:00.202506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.225 [2024-11-27 05:50:00.202522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.225 [2024-11-27 05:50:00.202529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.225 [2024-11-27 05:50:00.202753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.225 [2024-11-27 05:50:00.202929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.225 [2024-11-27 05:50:00.202937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.225 [2024-11-27 05:50:00.202943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.225 [2024-11-27 05:50:00.202949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.225 [2024-11-27 05:50:00.215005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.225 [2024-11-27 05:50:00.215402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.225 [2024-11-27 05:50:00.215419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.225 [2024-11-27 05:50:00.215425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.226 [2024-11-27 05:50:00.215593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.226 [2024-11-27 05:50:00.215784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.226 [2024-11-27 05:50:00.215792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.226 [2024-11-27 05:50:00.215799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.226 [2024-11-27 05:50:00.215805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.487 [2024-11-27 05:50:00.227923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.487 [2024-11-27 05:50:00.228358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.487 [2024-11-27 05:50:00.228401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.487 [2024-11-27 05:50:00.228424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.487 [2024-11-27 05:50:00.229025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.487 [2024-11-27 05:50:00.229421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.487 [2024-11-27 05:50:00.229429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.487 [2024-11-27 05:50:00.229435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.487 [2024-11-27 05:50:00.229441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.487 [2024-11-27 05:50:00.240901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.487 [2024-11-27 05:50:00.241301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.487 [2024-11-27 05:50:00.241317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.487 [2024-11-27 05:50:00.241324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.487 [2024-11-27 05:50:00.241492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.487 [2024-11-27 05:50:00.241663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.487 [2024-11-27 05:50:00.241677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.487 [2024-11-27 05:50:00.241683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.487 [2024-11-27 05:50:00.241690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.487 [2024-11-27 05:50:00.253797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.487 [2024-11-27 05:50:00.254155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.487 [2024-11-27 05:50:00.254172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.487 [2024-11-27 05:50:00.254179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.487 [2024-11-27 05:50:00.254346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.487 [2024-11-27 05:50:00.254513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.487 [2024-11-27 05:50:00.254521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.487 [2024-11-27 05:50:00.254527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.487 [2024-11-27 05:50:00.254533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.487 [2024-11-27 05:50:00.266752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.487 [2024-11-27 05:50:00.267129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.487 [2024-11-27 05:50:00.267144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.487 [2024-11-27 05:50:00.267151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.487 [2024-11-27 05:50:00.267318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.487 [2024-11-27 05:50:00.267486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.487 [2024-11-27 05:50:00.267494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.487 [2024-11-27 05:50:00.267503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.487 [2024-11-27 05:50:00.267510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.487 [2024-11-27 05:50:00.279549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.487 [2024-11-27 05:50:00.279964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.487 [2024-11-27 05:50:00.279981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.487 [2024-11-27 05:50:00.279987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.487 [2024-11-27 05:50:00.280155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.487 [2024-11-27 05:50:00.280322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.487 [2024-11-27 05:50:00.280330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.487 [2024-11-27 05:50:00.280336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.487 [2024-11-27 05:50:00.280342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.487 [2024-11-27 05:50:00.292473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.487 [2024-11-27 05:50:00.292878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.487 [2024-11-27 05:50:00.292895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.487 [2024-11-27 05:50:00.292902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.487 [2024-11-27 05:50:00.293070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.487 [2024-11-27 05:50:00.293236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.487 [2024-11-27 05:50:00.293244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.487 [2024-11-27 05:50:00.293250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.487 [2024-11-27 05:50:00.293256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.487 [2024-11-27 05:50:00.305454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.487 [2024-11-27 05:50:00.305855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.487 [2024-11-27 05:50:00.305872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.487 [2024-11-27 05:50:00.305879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.487 [2024-11-27 05:50:00.306046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.487 [2024-11-27 05:50:00.306214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.487 [2024-11-27 05:50:00.306221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.487 [2024-11-27 05:50:00.306227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.487 [2024-11-27 05:50:00.306233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.487 [2024-11-27 05:50:00.318378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.487 [2024-11-27 05:50:00.318792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.487 [2024-11-27 05:50:00.318809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.487 [2024-11-27 05:50:00.318815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.487 [2024-11-27 05:50:00.318983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.487 [2024-11-27 05:50:00.319150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.487 [2024-11-27 05:50:00.319157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.487 [2024-11-27 05:50:00.319163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.487 [2024-11-27 05:50:00.319169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.488 [2024-11-27 05:50:00.331184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.488 [2024-11-27 05:50:00.331579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.488 [2024-11-27 05:50:00.331621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.488 [2024-11-27 05:50:00.331644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.488 [2024-11-27 05:50:00.332240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.488 [2024-11-27 05:50:00.332683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.488 [2024-11-27 05:50:00.332691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.488 [2024-11-27 05:50:00.332697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.488 [2024-11-27 05:50:00.332703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.488 [2024-11-27 05:50:00.343980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.488 [2024-11-27 05:50:00.344395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.488 [2024-11-27 05:50:00.344411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.488 [2024-11-27 05:50:00.344418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.488 [2024-11-27 05:50:00.344586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.488 [2024-11-27 05:50:00.344764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.488 [2024-11-27 05:50:00.344772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.488 [2024-11-27 05:50:00.344779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.488 [2024-11-27 05:50:00.344785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.488 [2024-11-27 05:50:00.356743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.488 [2024-11-27 05:50:00.357118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.488 [2024-11-27 05:50:00.357134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.488 [2024-11-27 05:50:00.357143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.488 [2024-11-27 05:50:00.357312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.488 [2024-11-27 05:50:00.357479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.488 [2024-11-27 05:50:00.357487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.488 [2024-11-27 05:50:00.357493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.488 [2024-11-27 05:50:00.357499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.488 [2024-11-27 05:50:00.369680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.488 [2024-11-27 05:50:00.370077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.488 [2024-11-27 05:50:00.370094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.488 [2024-11-27 05:50:00.370100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.488 [2024-11-27 05:50:00.370269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.488 [2024-11-27 05:50:00.370437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.488 [2024-11-27 05:50:00.370445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.488 [2024-11-27 05:50:00.370451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.488 [2024-11-27 05:50:00.370457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.488 [2024-11-27 05:50:00.382433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.488 [2024-11-27 05:50:00.382800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.488 [2024-11-27 05:50:00.382816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.488 [2024-11-27 05:50:00.382823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.488 [2024-11-27 05:50:00.382991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.488 [2024-11-27 05:50:00.383163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.488 [2024-11-27 05:50:00.383171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.488 [2024-11-27 05:50:00.383177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.488 [2024-11-27 05:50:00.383183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.488 [2024-11-27 05:50:00.395338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.488 [2024-11-27 05:50:00.395782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.488 [2024-11-27 05:50:00.395827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.488 [2024-11-27 05:50:00.395849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.488 [2024-11-27 05:50:00.396432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.488 [2024-11-27 05:50:00.396699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.488 [2024-11-27 05:50:00.396708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.488 [2024-11-27 05:50:00.396715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.488 [2024-11-27 05:50:00.396721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.488 [2024-11-27 05:50:00.408358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.488 [2024-11-27 05:50:00.408761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.488 [2024-11-27 05:50:00.408779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.488 [2024-11-27 05:50:00.408786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.488 [2024-11-27 05:50:00.408958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.488 [2024-11-27 05:50:00.409131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.488 [2024-11-27 05:50:00.409138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.488 [2024-11-27 05:50:00.409145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.488 [2024-11-27 05:50:00.409151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.488 [2024-11-27 05:50:00.421366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.488 [2024-11-27 05:50:00.421746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.488 [2024-11-27 05:50:00.421763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.488 [2024-11-27 05:50:00.421770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.488 [2024-11-27 05:50:00.421938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.488 [2024-11-27 05:50:00.422105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.488 [2024-11-27 05:50:00.422113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.488 [2024-11-27 05:50:00.422119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.488 [2024-11-27 05:50:00.422125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.488 [2024-11-27 05:50:00.434248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.488 [2024-11-27 05:50:00.434653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.488 [2024-11-27 05:50:00.434709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.488 [2024-11-27 05:50:00.434732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.488 [2024-11-27 05:50:00.435315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.488 [2024-11-27 05:50:00.435889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.488 [2024-11-27 05:50:00.435898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.488 [2024-11-27 05:50:00.435908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.488 [2024-11-27 05:50:00.435914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.488 [2024-11-27 05:50:00.447185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.488 [2024-11-27 05:50:00.447534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.488 [2024-11-27 05:50:00.447550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.488 [2024-11-27 05:50:00.447557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.488 [2024-11-27 05:50:00.447747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.488 [2024-11-27 05:50:00.447919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.488 [2024-11-27 05:50:00.447927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.489 [2024-11-27 05:50:00.447934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.489 [2024-11-27 05:50:00.447940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.489 [2024-11-27 05:50:00.460159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.489 [2024-11-27 05:50:00.460538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.489 [2024-11-27 05:50:00.460555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.489 [2024-11-27 05:50:00.460561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.489 [2024-11-27 05:50:00.460753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.489 [2024-11-27 05:50:00.460926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.489 [2024-11-27 05:50:00.460933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.489 [2024-11-27 05:50:00.460940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.489 [2024-11-27 05:50:00.460946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.489 [2024-11-27 05:50:00.472979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.489 [2024-11-27 05:50:00.473410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.489 [2024-11-27 05:50:00.473455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.489 [2024-11-27 05:50:00.473478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.489 [2024-11-27 05:50:00.473875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.489 [2024-11-27 05:50:00.474049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.489 [2024-11-27 05:50:00.474057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.489 [2024-11-27 05:50:00.474063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.489 [2024-11-27 05:50:00.474069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.489 [2024-11-27 05:50:00.485964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.489 [2024-11-27 05:50:00.486323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.489 [2024-11-27 05:50:00.486366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.489 [2024-11-27 05:50:00.486388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.489 [2024-11-27 05:50:00.486894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.489 [2024-11-27 05:50:00.487067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.489 [2024-11-27 05:50:00.487075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.489 [2024-11-27 05:50:00.487081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.489 [2024-11-27 05:50:00.487087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.750 [2024-11-27 05:50:00.498852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.750 [2024-11-27 05:50:00.499249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.750 [2024-11-27 05:50:00.499266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.750 [2024-11-27 05:50:00.499273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.750 [2024-11-27 05:50:00.499441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.750 [2024-11-27 05:50:00.499609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.750 [2024-11-27 05:50:00.499616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.750 [2024-11-27 05:50:00.499623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.750 [2024-11-27 05:50:00.499629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.750 [2024-11-27 05:50:00.511746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.750 [2024-11-27 05:50:00.512152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.750 [2024-11-27 05:50:00.512195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.750 [2024-11-27 05:50:00.512218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.750 [2024-11-27 05:50:00.512821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.750 [2024-11-27 05:50:00.512989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.750 [2024-11-27 05:50:00.512996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.750 [2024-11-27 05:50:00.513002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.750 [2024-11-27 05:50:00.513008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.750 [2024-11-27 05:50:00.524643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.750 [2024-11-27 05:50:00.525046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.750 [2024-11-27 05:50:00.525062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.750 [2024-11-27 05:50:00.525072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.750 [2024-11-27 05:50:00.525664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.750 [2024-11-27 05:50:00.525859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.750 [2024-11-27 05:50:00.525867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.750 [2024-11-27 05:50:00.525874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.750 [2024-11-27 05:50:00.525880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.750 [2024-11-27 05:50:00.537547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.750 [2024-11-27 05:50:00.537951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.750 [2024-11-27 05:50:00.537968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.750 [2024-11-27 05:50:00.537975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.750 [2024-11-27 05:50:00.538142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.750 [2024-11-27 05:50:00.538310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.750 [2024-11-27 05:50:00.538317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.750 [2024-11-27 05:50:00.538323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.750 [2024-11-27 05:50:00.538329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.750 [2024-11-27 05:50:00.550447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.750 [2024-11-27 05:50:00.550880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.750 [2024-11-27 05:50:00.550897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.750 [2024-11-27 05:50:00.550905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.750 [2024-11-27 05:50:00.551077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.750 [2024-11-27 05:50:00.551251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.750 [2024-11-27 05:50:00.551259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.750 [2024-11-27 05:50:00.551265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.750 [2024-11-27 05:50:00.551271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.750 [2024-11-27 05:50:00.563347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.750 [2024-11-27 05:50:00.563753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.750 [2024-11-27 05:50:00.563770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.750 [2024-11-27 05:50:00.563776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.750 [2024-11-27 05:50:00.563943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.750 [2024-11-27 05:50:00.564114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.750 [2024-11-27 05:50:00.564122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.750 [2024-11-27 05:50:00.564128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.750 [2024-11-27 05:50:00.564134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.750 [2024-11-27 05:50:00.576124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.750 [2024-11-27 05:50:00.576533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.750 [2024-11-27 05:50:00.576549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.750 [2024-11-27 05:50:00.576556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.750 [2024-11-27 05:50:00.576746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.750 [2024-11-27 05:50:00.576918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.750 [2024-11-27 05:50:00.576926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.751 [2024-11-27 05:50:00.576933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.751 [2024-11-27 05:50:00.576939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.751 [2024-11-27 05:50:00.589149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.751 [2024-11-27 05:50:00.589563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.751 [2024-11-27 05:50:00.589579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.751 [2024-11-27 05:50:00.589586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.751 [2024-11-27 05:50:00.589778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.751 [2024-11-27 05:50:00.589952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.751 [2024-11-27 05:50:00.589960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.751 [2024-11-27 05:50:00.589966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.751 [2024-11-27 05:50:00.589972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.751 [2024-11-27 05:50:00.601962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:12.751 [2024-11-27 05:50:00.602384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.751 [2024-11-27 05:50:00.602400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:12.751 [2024-11-27 05:50:00.602406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:12.751 [2024-11-27 05:50:00.602565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:12.751 [2024-11-27 05:50:00.602747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:12.751 [2024-11-27 05:50:00.602755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:12.751 [2024-11-27 05:50:00.602765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:12.751 [2024-11-27 05:50:00.602771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:12.751 [2024-11-27 05:50:00.614927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.751 [2024-11-27 05:50:00.615348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.751 [2024-11-27 05:50:00.615365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.751 [2024-11-27 05:50:00.615372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.751 [2024-11-27 05:50:00.615539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.751 [2024-11-27 05:50:00.615712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.751 [2024-11-27 05:50:00.615720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.751 [2024-11-27 05:50:00.615726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.751 [2024-11-27 05:50:00.615733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.751 [2024-11-27 05:50:00.627933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.751 [2024-11-27 05:50:00.628341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.751 [2024-11-27 05:50:00.628357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.751 [2024-11-27 05:50:00.628364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.751 [2024-11-27 05:50:00.628532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.751 [2024-11-27 05:50:00.628727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.751 [2024-11-27 05:50:00.628736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.751 [2024-11-27 05:50:00.628743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.751 [2024-11-27 05:50:00.628749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.751 [2024-11-27 05:50:00.640866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.751 [2024-11-27 05:50:00.641271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.751 [2024-11-27 05:50:00.641287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.751 [2024-11-27 05:50:00.641294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.751 [2024-11-27 05:50:00.641462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.751 [2024-11-27 05:50:00.641629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.751 [2024-11-27 05:50:00.641636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.751 [2024-11-27 05:50:00.641642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.751 [2024-11-27 05:50:00.641648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.751 [2024-11-27 05:50:00.653765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.751 [2024-11-27 05:50:00.654192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.751 [2024-11-27 05:50:00.654208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.751 [2024-11-27 05:50:00.654215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.751 [2024-11-27 05:50:00.654383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.751 [2024-11-27 05:50:00.654550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.751 [2024-11-27 05:50:00.654558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.751 [2024-11-27 05:50:00.654564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.751 [2024-11-27 05:50:00.654571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.751 [2024-11-27 05:50:00.666922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.751 [2024-11-27 05:50:00.667347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.751 [2024-11-27 05:50:00.667391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.751 [2024-11-27 05:50:00.667413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.751 [2024-11-27 05:50:00.668012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.751 [2024-11-27 05:50:00.668471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.751 [2024-11-27 05:50:00.668478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.751 [2024-11-27 05:50:00.668485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.751 [2024-11-27 05:50:00.668491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.751 [2024-11-27 05:50:00.679893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.751 [2024-11-27 05:50:00.680295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.751 [2024-11-27 05:50:00.680311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.751 [2024-11-27 05:50:00.680318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.751 [2024-11-27 05:50:00.680486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.751 [2024-11-27 05:50:00.680653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.751 [2024-11-27 05:50:00.680660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.751 [2024-11-27 05:50:00.680666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.751 [2024-11-27 05:50:00.680679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.751 [2024-11-27 05:50:00.692875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.751 [2024-11-27 05:50:00.693298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.751 [2024-11-27 05:50:00.693315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.751 [2024-11-27 05:50:00.693326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.751 [2024-11-27 05:50:00.693495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.751 [2024-11-27 05:50:00.693667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.751 [2024-11-27 05:50:00.693682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.751 [2024-11-27 05:50:00.693689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.751 [2024-11-27 05:50:00.693695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.751 [2024-11-27 05:50:00.705955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.751 [2024-11-27 05:50:00.706352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.751 [2024-11-27 05:50:00.706369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.751 [2024-11-27 05:50:00.706376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.751 [2024-11-27 05:50:00.706544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.752 [2024-11-27 05:50:00.706717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.752 [2024-11-27 05:50:00.706726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.752 [2024-11-27 05:50:00.706732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.752 [2024-11-27 05:50:00.706739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.752 [2024-11-27 05:50:00.718862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.752 [2024-11-27 05:50:00.719211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.752 [2024-11-27 05:50:00.719254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.752 [2024-11-27 05:50:00.719277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.752 [2024-11-27 05:50:00.719874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.752 [2024-11-27 05:50:00.720452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.752 [2024-11-27 05:50:00.720460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.752 [2024-11-27 05:50:00.720467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.752 [2024-11-27 05:50:00.720473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.752 [2024-11-27 05:50:00.731802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.752 [2024-11-27 05:50:00.732208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.752 [2024-11-27 05:50:00.732225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.752 [2024-11-27 05:50:00.732232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.752 [2024-11-27 05:50:00.732404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.752 [2024-11-27 05:50:00.732581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.752 [2024-11-27 05:50:00.732589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.752 [2024-11-27 05:50:00.732595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.752 [2024-11-27 05:50:00.732601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:12.752 [2024-11-27 05:50:00.744689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:12.752 [2024-11-27 05:50:00.745114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:12.752 [2024-11-27 05:50:00.745158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:12.752 [2024-11-27 05:50:00.745181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:12.752 [2024-11-27 05:50:00.745603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:12.752 [2024-11-27 05:50:00.745782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:12.752 [2024-11-27 05:50:00.745791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:12.752 [2024-11-27 05:50:00.745797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:12.752 [2024-11-27 05:50:00.745803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.013 [2024-11-27 05:50:00.757727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.013 [2024-11-27 05:50:00.758115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.013 [2024-11-27 05:50:00.758160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.013 [2024-11-27 05:50:00.758183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.013 [2024-11-27 05:50:00.758703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.013 [2024-11-27 05:50:00.758893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.013 [2024-11-27 05:50:00.758902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.013 [2024-11-27 05:50:00.758908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.013 [2024-11-27 05:50:00.758914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.013 [2024-11-27 05:50:00.770666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.013 [2024-11-27 05:50:00.770998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.013 [2024-11-27 05:50:00.771014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.013 [2024-11-27 05:50:00.771021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.013 [2024-11-27 05:50:00.771189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.013 [2024-11-27 05:50:00.771356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.013 [2024-11-27 05:50:00.771364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.013 [2024-11-27 05:50:00.771376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.013 [2024-11-27 05:50:00.771383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.013 [2024-11-27 05:50:00.783750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.013 [2024-11-27 05:50:00.784156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.013 [2024-11-27 05:50:00.784172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.013 [2024-11-27 05:50:00.784179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.013 [2024-11-27 05:50:00.784352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.013 [2024-11-27 05:50:00.784524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.013 [2024-11-27 05:50:00.784532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.013 [2024-11-27 05:50:00.784539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.013 [2024-11-27 05:50:00.784545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.013 [2024-11-27 05:50:00.796787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.013 [2024-11-27 05:50:00.797202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.013 [2024-11-27 05:50:00.797217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.013 [2024-11-27 05:50:00.797224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.013 [2024-11-27 05:50:00.797398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.013 [2024-11-27 05:50:00.797571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.013 [2024-11-27 05:50:00.797579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.013 [2024-11-27 05:50:00.797586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.013 [2024-11-27 05:50:00.797593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.013 [2024-11-27 05:50:00.809839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.013 [2024-11-27 05:50:00.810241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.013 [2024-11-27 05:50:00.810257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.013 [2024-11-27 05:50:00.810264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.013 [2024-11-27 05:50:00.810436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.013 [2024-11-27 05:50:00.810612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.013 [2024-11-27 05:50:00.810620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.013 [2024-11-27 05:50:00.810626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.013 [2024-11-27 05:50:00.810633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.013 [2024-11-27 05:50:00.822868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.013 [2024-11-27 05:50:00.823202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.013 [2024-11-27 05:50:00.823218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.013 [2024-11-27 05:50:00.823225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.013 [2024-11-27 05:50:00.823397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.013 [2024-11-27 05:50:00.823573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.013 [2024-11-27 05:50:00.823581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.013 [2024-11-27 05:50:00.823587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.013 [2024-11-27 05:50:00.823593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.013 [2024-11-27 05:50:00.835749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.013 [2024-11-27 05:50:00.836130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.013 [2024-11-27 05:50:00.836147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.013 [2024-11-27 05:50:00.836154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.013 [2024-11-27 05:50:00.836323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.013 [2024-11-27 05:50:00.836490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.013 [2024-11-27 05:50:00.836497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.013 [2024-11-27 05:50:00.836504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.013 [2024-11-27 05:50:00.836510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.013 [2024-11-27 05:50:00.848535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.013 [2024-11-27 05:50:00.848966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.013 [2024-11-27 05:50:00.848983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.013 [2024-11-27 05:50:00.848990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.013 [2024-11-27 05:50:00.849157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.013 [2024-11-27 05:50:00.849324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.013 [2024-11-27 05:50:00.849332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.013 [2024-11-27 05:50:00.849338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.013 [2024-11-27 05:50:00.849344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.013 [2024-11-27 05:50:00.861357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.013 [2024-11-27 05:50:00.861814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.013 [2024-11-27 05:50:00.861859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.013 [2024-11-27 05:50:00.861889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.013 [2024-11-27 05:50:00.862472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.013 [2024-11-27 05:50:00.862920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.013 [2024-11-27 05:50:00.862928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.013 [2024-11-27 05:50:00.862934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.013 [2024-11-27 05:50:00.862940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.014 [2024-11-27 05:50:00.874132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.014 [2024-11-27 05:50:00.874526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.014 [2024-11-27 05:50:00.874542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.014 [2024-11-27 05:50:00.874548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.014 [2024-11-27 05:50:00.874729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.014 [2024-11-27 05:50:00.874897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.014 [2024-11-27 05:50:00.874904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.014 [2024-11-27 05:50:00.874911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.014 [2024-11-27 05:50:00.874917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.014 [2024-11-27 05:50:00.886902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.014 [2024-11-27 05:50:00.887332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.014 [2024-11-27 05:50:00.887376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.014 [2024-11-27 05:50:00.887399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.014 [2024-11-27 05:50:00.887939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.014 [2024-11-27 05:50:00.888330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.014 [2024-11-27 05:50:00.888347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.014 [2024-11-27 05:50:00.888361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.014 [2024-11-27 05:50:00.888374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.014 [2024-11-27 05:50:00.902175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.014 [2024-11-27 05:50:00.902701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.014 [2024-11-27 05:50:00.902722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.014 [2024-11-27 05:50:00.902732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.014 [2024-11-27 05:50:00.902986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.014 [2024-11-27 05:50:00.903243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.014 [2024-11-27 05:50:00.903255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.014 [2024-11-27 05:50:00.903264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.014 [2024-11-27 05:50:00.903273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.014 [2024-11-27 05:50:00.915159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.014 [2024-11-27 05:50:00.915584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.014 [2024-11-27 05:50:00.915600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.014 [2024-11-27 05:50:00.915608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.014 [2024-11-27 05:50:00.915781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.014 [2024-11-27 05:50:00.915950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.014 [2024-11-27 05:50:00.915958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.014 [2024-11-27 05:50:00.915965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.014 [2024-11-27 05:50:00.915972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.014 [2024-11-27 05:50:00.928253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.014 [2024-11-27 05:50:00.928681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.014 [2024-11-27 05:50:00.928698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.014 [2024-11-27 05:50:00.928705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.014 [2024-11-27 05:50:00.928878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.014 [2024-11-27 05:50:00.929051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.014 [2024-11-27 05:50:00.929059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.014 [2024-11-27 05:50:00.929067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.014 [2024-11-27 05:50:00.929074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.014 [2024-11-27 05:50:00.940982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.014 [2024-11-27 05:50:00.941425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.014 [2024-11-27 05:50:00.941463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.014 [2024-11-27 05:50:00.941487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.014 [2024-11-27 05:50:00.942041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.014 [2024-11-27 05:50:00.942210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.014 [2024-11-27 05:50:00.942218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.014 [2024-11-27 05:50:00.942227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.014 [2024-11-27 05:50:00.942234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.014 [2024-11-27 05:50:00.954052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.014 [2024-11-27 05:50:00.954404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.014 [2024-11-27 05:50:00.954421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.014 [2024-11-27 05:50:00.954427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.014 [2024-11-27 05:50:00.954587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.014 [2024-11-27 05:50:00.954769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.014 [2024-11-27 05:50:00.954778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.014 [2024-11-27 05:50:00.954784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.014 [2024-11-27 05:50:00.954790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.014 [2024-11-27 05:50:00.966932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.014 [2024-11-27 05:50:00.967330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.014 [2024-11-27 05:50:00.967347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.014 [2024-11-27 05:50:00.967354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.014 [2024-11-27 05:50:00.967521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.014 [2024-11-27 05:50:00.967694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.014 [2024-11-27 05:50:00.967703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.014 [2024-11-27 05:50:00.967709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.014 [2024-11-27 05:50:00.967715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.014 [2024-11-27 05:50:00.979750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.014 [2024-11-27 05:50:00.980117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.014 [2024-11-27 05:50:00.980162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.014 [2024-11-27 05:50:00.980185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.014 [2024-11-27 05:50:00.980783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.014 [2024-11-27 05:50:00.981228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.014 [2024-11-27 05:50:00.981237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.014 [2024-11-27 05:50:00.981243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.014 [2024-11-27 05:50:00.981249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.014 [2024-11-27 05:50:00.992603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.014 [2024-11-27 05:50:00.993056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.014 [2024-11-27 05:50:00.993102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.014 [2024-11-27 05:50:00.993126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.014 [2024-11-27 05:50:00.993718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.014 [2024-11-27 05:50:00.994191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.014 [2024-11-27 05:50:00.994199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.014 [2024-11-27 05:50:00.994206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.015 [2024-11-27 05:50:00.994212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.015 [2024-11-27 05:50:01.005529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.015 [2024-11-27 05:50:01.005852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.015 [2024-11-27 05:50:01.005868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.015 [2024-11-27 05:50:01.005876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.015 [2024-11-27 05:50:01.006044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.015 [2024-11-27 05:50:01.006212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.015 [2024-11-27 05:50:01.006220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.015 [2024-11-27 05:50:01.006226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.015 [2024-11-27 05:50:01.006232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.275 [2024-11-27 05:50:01.018499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.275 [2024-11-27 05:50:01.018933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.275 [2024-11-27 05:50:01.018950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.275 [2024-11-27 05:50:01.018957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.275 [2024-11-27 05:50:01.019131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.275 [2024-11-27 05:50:01.019306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.275 [2024-11-27 05:50:01.019315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.275 [2024-11-27 05:50:01.019321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.275 [2024-11-27 05:50:01.019327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.275 [2024-11-27 05:50:01.031557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.275 [2024-11-27 05:50:01.031992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.275 [2024-11-27 05:50:01.032009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.275 [2024-11-27 05:50:01.032019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.275 [2024-11-27 05:50:01.032191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.275 [2024-11-27 05:50:01.032365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.275 [2024-11-27 05:50:01.032373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.275 [2024-11-27 05:50:01.032379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.275 [2024-11-27 05:50:01.032385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.275 [2024-11-27 05:50:01.044679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.275 [2024-11-27 05:50:01.045054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.275 [2024-11-27 05:50:01.045071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.275 [2024-11-27 05:50:01.045078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.275 [2024-11-27 05:50:01.045251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.275 [2024-11-27 05:50:01.045424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.275 [2024-11-27 05:50:01.045433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.275 [2024-11-27 05:50:01.045439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.275 [2024-11-27 05:50:01.045445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.275 7239.50 IOPS, 28.28 MiB/s [2024-11-27T04:50:01.279Z] [2024-11-27 05:50:01.059255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.275 [2024-11-27 05:50:01.059682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.275 [2024-11-27 05:50:01.059700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.275 [2024-11-27 05:50:01.059708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.275 [2024-11-27 05:50:01.059891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.275 [2024-11-27 05:50:01.060075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.275 [2024-11-27 05:50:01.060083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.275 [2024-11-27 05:50:01.060090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.275 [2024-11-27 05:50:01.060097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.275 [2024-11-27 05:50:01.072256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.275 [2024-11-27 05:50:01.072683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.275 [2024-11-27 05:50:01.072700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.275 [2024-11-27 05:50:01.072708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.275 [2024-11-27 05:50:01.072891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.275 [2024-11-27 05:50:01.073079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.275 [2024-11-27 05:50:01.073087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.275 [2024-11-27 05:50:01.073094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.275 [2024-11-27 05:50:01.073101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.275 [2024-11-27 05:50:01.085485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.275 [2024-11-27 05:50:01.085937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.275 [2024-11-27 05:50:01.085954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.275 [2024-11-27 05:50:01.085962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.275 [2024-11-27 05:50:01.086146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.275 [2024-11-27 05:50:01.086330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.275 [2024-11-27 05:50:01.086338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.275 [2024-11-27 05:50:01.086345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.275 [2024-11-27 05:50:01.086352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.275 [2024-11-27 05:50:01.098491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.276 [2024-11-27 05:50:01.098927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.276 [2024-11-27 05:50:01.098944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.276 [2024-11-27 05:50:01.098951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.276 [2024-11-27 05:50:01.099124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.276 [2024-11-27 05:50:01.099296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.276 [2024-11-27 05:50:01.099304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.276 [2024-11-27 05:50:01.099310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.276 [2024-11-27 05:50:01.099317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.276 [2024-11-27 05:50:01.111720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.276 [2024-11-27 05:50:01.112114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.276 [2024-11-27 05:50:01.112131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.276 [2024-11-27 05:50:01.112138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.276 [2024-11-27 05:50:01.112323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.276 [2024-11-27 05:50:01.112506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.276 [2024-11-27 05:50:01.112515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.276 [2024-11-27 05:50:01.112525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.276 [2024-11-27 05:50:01.112531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.276 [2024-11-27 05:50:01.124883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.276 [2024-11-27 05:50:01.125219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.276 [2024-11-27 05:50:01.125237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.276 [2024-11-27 05:50:01.125244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.276 [2024-11-27 05:50:01.125427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.276 [2024-11-27 05:50:01.125611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.276 [2024-11-27 05:50:01.125620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.276 [2024-11-27 05:50:01.125626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.276 [2024-11-27 05:50:01.125633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.276 [2024-11-27 05:50:01.138063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.276 [2024-11-27 05:50:01.138495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.276 [2024-11-27 05:50:01.138512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.276 [2024-11-27 05:50:01.138519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.276 [2024-11-27 05:50:01.138713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.276 [2024-11-27 05:50:01.138898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.276 [2024-11-27 05:50:01.138912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.276 [2024-11-27 05:50:01.138919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.276 [2024-11-27 05:50:01.138925] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.276 [2024-11-27 05:50:01.151132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.276 [2024-11-27 05:50:01.151567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.276 [2024-11-27 05:50:01.151583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.276 [2024-11-27 05:50:01.151590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.276 [2024-11-27 05:50:01.151768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.276 [2024-11-27 05:50:01.151941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.276 [2024-11-27 05:50:01.151949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.276 [2024-11-27 05:50:01.151955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.276 [2024-11-27 05:50:01.151961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.276 [2024-11-27 05:50:01.164172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.276 [2024-11-27 05:50:01.164601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.276 [2024-11-27 05:50:01.164618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.276 [2024-11-27 05:50:01.164625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.276 [2024-11-27 05:50:01.164803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.276 [2024-11-27 05:50:01.164976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.276 [2024-11-27 05:50:01.164984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.276 [2024-11-27 05:50:01.164990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.276 [2024-11-27 05:50:01.164997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.276 [2024-11-27 05:50:01.177186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.276 [2024-11-27 05:50:01.177625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.276 [2024-11-27 05:50:01.177642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.276 [2024-11-27 05:50:01.177649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.276 [2024-11-27 05:50:01.177838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.276 [2024-11-27 05:50:01.178023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.276 [2024-11-27 05:50:01.178031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.276 [2024-11-27 05:50:01.178039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.276 [2024-11-27 05:50:01.178046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.276 [2024-11-27 05:50:01.190462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.276 [2024-11-27 05:50:01.190856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.276 [2024-11-27 05:50:01.190874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.276 [2024-11-27 05:50:01.190882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.276 [2024-11-27 05:50:01.191078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.276 [2024-11-27 05:50:01.191274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.276 [2024-11-27 05:50:01.191283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.276 [2024-11-27 05:50:01.191291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.276 [2024-11-27 05:50:01.191298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.276 [2024-11-27 05:50:01.203543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.276 [2024-11-27 05:50:01.203890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.276 [2024-11-27 05:50:01.203908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.276 [2024-11-27 05:50:01.203918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.276 [2024-11-27 05:50:01.204092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.276 [2024-11-27 05:50:01.204265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.276 [2024-11-27 05:50:01.204273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.276 [2024-11-27 05:50:01.204279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.276 [2024-11-27 05:50:01.204286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.276 [2024-11-27 05:50:01.216658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.276 [2024-11-27 05:50:01.217086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.276 [2024-11-27 05:50:01.217102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.276 [2024-11-27 05:50:01.217110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.276 [2024-11-27 05:50:01.217282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.276 [2024-11-27 05:50:01.217455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.277 [2024-11-27 05:50:01.217462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.277 [2024-11-27 05:50:01.217469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.277 [2024-11-27 05:50:01.217475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.277 [2024-11-27 05:50:01.229923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.277 [2024-11-27 05:50:01.230363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.277 [2024-11-27 05:50:01.230380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.277 [2024-11-27 05:50:01.230388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.277 [2024-11-27 05:50:01.230598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.277 [2024-11-27 05:50:01.230799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.277 [2024-11-27 05:50:01.230808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.277 [2024-11-27 05:50:01.230816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.277 [2024-11-27 05:50:01.230824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.277 [2024-11-27 05:50:01.242943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.277 [2024-11-27 05:50:01.243351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.277 [2024-11-27 05:50:01.243367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.277 [2024-11-27 05:50:01.243374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.277 [2024-11-27 05:50:01.243546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.277 [2024-11-27 05:50:01.243748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.277 [2024-11-27 05:50:01.243758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.277 [2024-11-27 05:50:01.243764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.277 [2024-11-27 05:50:01.243771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.277 [2024-11-27 05:50:01.256286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.277 [2024-11-27 05:50:01.256729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.277 [2024-11-27 05:50:01.256748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.277 [2024-11-27 05:50:01.256756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.277 [2024-11-27 05:50:01.256952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.277 [2024-11-27 05:50:01.257148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.277 [2024-11-27 05:50:01.257156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.277 [2024-11-27 05:50:01.257164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.277 [2024-11-27 05:50:01.257171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.277 [2024-11-27 05:50:01.269625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.277 [2024-11-27 05:50:01.270074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.277 [2024-11-27 05:50:01.270092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.277 [2024-11-27 05:50:01.270099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.277 [2024-11-27 05:50:01.270282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.277 [2024-11-27 05:50:01.270465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.277 [2024-11-27 05:50:01.270474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.277 [2024-11-27 05:50:01.270480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.277 [2024-11-27 05:50:01.270487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.537 [2024-11-27 05:50:01.282705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.537 [2024-11-27 05:50:01.283124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.537 [2024-11-27 05:50:01.283141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.537 [2024-11-27 05:50:01.283148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.537 [2024-11-27 05:50:01.283332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.537 [2024-11-27 05:50:01.283516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.537 [2024-11-27 05:50:01.283524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.537 [2024-11-27 05:50:01.283534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.537 [2024-11-27 05:50:01.283541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.537 [2024-11-27 05:50:01.296045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.537 [2024-11-27 05:50:01.296398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.537 [2024-11-27 05:50:01.296415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.537 [2024-11-27 05:50:01.296422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.537 [2024-11-27 05:50:01.296606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.537 [2024-11-27 05:50:01.296795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.537 [2024-11-27 05:50:01.296804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.537 [2024-11-27 05:50:01.296811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.537 [2024-11-27 05:50:01.296817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.537 [2024-11-27 05:50:01.309295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.537 [2024-11-27 05:50:01.309715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.537 [2024-11-27 05:50:01.309733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.537 [2024-11-27 05:50:01.309741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.537 [2024-11-27 05:50:01.309931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.537 [2024-11-27 05:50:01.310105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.537 [2024-11-27 05:50:01.310113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.537 [2024-11-27 05:50:01.310120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.538 [2024-11-27 05:50:01.310127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.538 [2024-11-27 05:50:01.322489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.538 [2024-11-27 05:50:01.322909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.538 [2024-11-27 05:50:01.322927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.538 [2024-11-27 05:50:01.322935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.538 [2024-11-27 05:50:01.323118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.538 [2024-11-27 05:50:01.323302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.538 [2024-11-27 05:50:01.323311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.538 [2024-11-27 05:50:01.323318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.538 [2024-11-27 05:50:01.323324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.538 [2024-11-27 05:50:01.335641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.538 [2024-11-27 05:50:01.336069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.538 [2024-11-27 05:50:01.336086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.538 [2024-11-27 05:50:01.336094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.538 [2024-11-27 05:50:01.336277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.538 [2024-11-27 05:50:01.336460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.538 [2024-11-27 05:50:01.336469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.538 [2024-11-27 05:50:01.336476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.538 [2024-11-27 05:50:01.336483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.538 [2024-11-27 05:50:01.348817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.538 [2024-11-27 05:50:01.349263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.538 [2024-11-27 05:50:01.349280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.538 [2024-11-27 05:50:01.349288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.538 [2024-11-27 05:50:01.349471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.538 [2024-11-27 05:50:01.349655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.538 [2024-11-27 05:50:01.349663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.538 [2024-11-27 05:50:01.349676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.538 [2024-11-27 05:50:01.349683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.538 [2024-11-27 05:50:01.361860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.538 [2024-11-27 05:50:01.362292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.538 [2024-11-27 05:50:01.362308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.538 [2024-11-27 05:50:01.362316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.538 [2024-11-27 05:50:01.362488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.538 [2024-11-27 05:50:01.362661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.538 [2024-11-27 05:50:01.362672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.538 [2024-11-27 05:50:01.362680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.538 [2024-11-27 05:50:01.362686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.538 [2024-11-27 05:50:01.374902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.538 [2024-11-27 05:50:01.375250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.538 [2024-11-27 05:50:01.375267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.538 [2024-11-27 05:50:01.375277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.538 [2024-11-27 05:50:01.375461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.538 [2024-11-27 05:50:01.375644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.538 [2024-11-27 05:50:01.375652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.538 [2024-11-27 05:50:01.375659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.538 [2024-11-27 05:50:01.375666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.538 [2024-11-27 05:50:01.387980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.538 [2024-11-27 05:50:01.388376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.538 [2024-11-27 05:50:01.388394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.538 [2024-11-27 05:50:01.388401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.538 [2024-11-27 05:50:01.388585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.538 [2024-11-27 05:50:01.388774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.538 [2024-11-27 05:50:01.388782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.538 [2024-11-27 05:50:01.388789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.538 [2024-11-27 05:50:01.388796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.538 [2024-11-27 05:50:01.401071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.538 [2024-11-27 05:50:01.401421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.538 [2024-11-27 05:50:01.401437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.538 [2024-11-27 05:50:01.401444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.538 [2024-11-27 05:50:01.401616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.538 [2024-11-27 05:50:01.401793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.538 [2024-11-27 05:50:01.401801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.538 [2024-11-27 05:50:01.401808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.538 [2024-11-27 05:50:01.401814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.538 [2024-11-27 05:50:01.414185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.538 [2024-11-27 05:50:01.414516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.538 [2024-11-27 05:50:01.414532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.538 [2024-11-27 05:50:01.414539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.538 [2024-11-27 05:50:01.414728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.538 [2024-11-27 05:50:01.414904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.538 [2024-11-27 05:50:01.414912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.538 [2024-11-27 05:50:01.414918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.538 [2024-11-27 05:50:01.414924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.538 [2024-11-27 05:50:01.426969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.538 [2024-11-27 05:50:01.427379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.538 [2024-11-27 05:50:01.427394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.538 [2024-11-27 05:50:01.427401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.538 [2024-11-27 05:50:01.427560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.538 [2024-11-27 05:50:01.427801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.538 [2024-11-27 05:50:01.427810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.538 [2024-11-27 05:50:01.427817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.538 [2024-11-27 05:50:01.427823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.538 [2024-11-27 05:50:01.439965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.538 [2024-11-27 05:50:01.440403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.538 [2024-11-27 05:50:01.440447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.538 [2024-11-27 05:50:01.440470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.538 [2024-11-27 05:50:01.441067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.538 [2024-11-27 05:50:01.441291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.539 [2024-11-27 05:50:01.441299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.539 [2024-11-27 05:50:01.441306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.539 [2024-11-27 05:50:01.441312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.539 [2024-11-27 05:50:01.452907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.539 [2024-11-27 05:50:01.453337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.539 [2024-11-27 05:50:01.453382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.539 [2024-11-27 05:50:01.453405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.539 [2024-11-27 05:50:01.454002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.539 [2024-11-27 05:50:01.454220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.539 [2024-11-27 05:50:01.454227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.539 [2024-11-27 05:50:01.454240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.539 [2024-11-27 05:50:01.454247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.539 [2024-11-27 05:50:01.465692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.539 [2024-11-27 05:50:01.466121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.539 [2024-11-27 05:50:01.466167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.539 [2024-11-27 05:50:01.466190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.539 [2024-11-27 05:50:01.466786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.539 [2024-11-27 05:50:01.467330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.539 [2024-11-27 05:50:01.467337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.539 [2024-11-27 05:50:01.467344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.539 [2024-11-27 05:50:01.467350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.539 [2024-11-27 05:50:01.478439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.539 [2024-11-27 05:50:01.478879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.539 [2024-11-27 05:50:01.478896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.539 [2024-11-27 05:50:01.478902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.539 [2024-11-27 05:50:01.479070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.539 [2024-11-27 05:50:01.479237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.539 [2024-11-27 05:50:01.479245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.539 [2024-11-27 05:50:01.479251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.539 [2024-11-27 05:50:01.479257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.539 [2024-11-27 05:50:01.491307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.539 [2024-11-27 05:50:01.491721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.539 [2024-11-27 05:50:01.491769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.539 [2024-11-27 05:50:01.491792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.539 [2024-11-27 05:50:01.492310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.539 [2024-11-27 05:50:01.492469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.539 [2024-11-27 05:50:01.492477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.539 [2024-11-27 05:50:01.492482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.539 [2024-11-27 05:50:01.492488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.539 [2024-11-27 05:50:01.504097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.539 [2024-11-27 05:50:01.504513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.539 [2024-11-27 05:50:01.504529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.539 [2024-11-27 05:50:01.504536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.539 [2024-11-27 05:50:01.504725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.539 [2024-11-27 05:50:01.504900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.539 [2024-11-27 05:50:01.504908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.539 [2024-11-27 05:50:01.504914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.539 [2024-11-27 05:50:01.504921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.539 [2024-11-27 05:50:01.517056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.539 [2024-11-27 05:50:01.517406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.539 [2024-11-27 05:50:01.517422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.539 [2024-11-27 05:50:01.517429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.539 [2024-11-27 05:50:01.517597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.539 [2024-11-27 05:50:01.517769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.539 [2024-11-27 05:50:01.517777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.539 [2024-11-27 05:50:01.517784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.539 [2024-11-27 05:50:01.517790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.539 [2024-11-27 05:50:01.529829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.539 [2024-11-27 05:50:01.530179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.539 [2024-11-27 05:50:01.530195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.539 [2024-11-27 05:50:01.530202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.539 [2024-11-27 05:50:01.530369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.539 [2024-11-27 05:50:01.530536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.539 [2024-11-27 05:50:01.530544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.539 [2024-11-27 05:50:01.530550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.539 [2024-11-27 05:50:01.530556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.798 [2024-11-27 05:50:01.542604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.798 [2024-11-27 05:50:01.542976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.798 [2024-11-27 05:50:01.542992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.798 [2024-11-27 05:50:01.543002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.798 [2024-11-27 05:50:01.543170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.798 [2024-11-27 05:50:01.543339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.798 [2024-11-27 05:50:01.543346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.798 [2024-11-27 05:50:01.543352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.798 [2024-11-27 05:50:01.543358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.798 [2024-11-27 05:50:01.555370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.798 [2024-11-27 05:50:01.555760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.798 [2024-11-27 05:50:01.555776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.798 [2024-11-27 05:50:01.555783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.798 [2024-11-27 05:50:01.555942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.798 [2024-11-27 05:50:01.556100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.798 [2024-11-27 05:50:01.556107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.798 [2024-11-27 05:50:01.556113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.798 [2024-11-27 05:50:01.556119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.798 [2024-11-27 05:50:01.568169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:13.798 [2024-11-27 05:50:01.568598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:13.798 [2024-11-27 05:50:01.568642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:13.798 [2024-11-27 05:50:01.568665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:13.798 [2024-11-27 05:50:01.569263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:13.798 [2024-11-27 05:50:01.569681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:13.798 [2024-11-27 05:50:01.569689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:13.798 [2024-11-27 05:50:01.569696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:13.798 [2024-11-27 05:50:01.569702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:13.798 [2024-11-27 05:50:01.580943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.798 [2024-11-27 05:50:01.581359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.798 [2024-11-27 05:50:01.581375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.798 [2024-11-27 05:50:01.581381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.798 [2024-11-27 05:50:01.581540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.798 [2024-11-27 05:50:01.581723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.798 [2024-11-27 05:50:01.581731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.798 [2024-11-27 05:50:01.581738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.798 [2024-11-27 05:50:01.581744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.798 [2024-11-27 05:50:01.593712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.798 [2024-11-27 05:50:01.594056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.798 [2024-11-27 05:50:01.594072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.798 [2024-11-27 05:50:01.594079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.798 [2024-11-27 05:50:01.594238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.798 [2024-11-27 05:50:01.594397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.798 [2024-11-27 05:50:01.594404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.798 [2024-11-27 05:50:01.594410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.798 [2024-11-27 05:50:01.594416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.798 [2024-11-27 05:50:01.606564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.798 [2024-11-27 05:50:01.606923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.798 [2024-11-27 05:50:01.606958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.798 [2024-11-27 05:50:01.606983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.798 [2024-11-27 05:50:01.607565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.798 [2024-11-27 05:50:01.607737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.798 [2024-11-27 05:50:01.607745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.798 [2024-11-27 05:50:01.607751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.798 [2024-11-27 05:50:01.607757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.798 [2024-11-27 05:50:01.619314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.798 [2024-11-27 05:50:01.619842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.798 [2024-11-27 05:50:01.619859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.798 [2024-11-27 05:50:01.619867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.798 [2024-11-27 05:50:01.620035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.798 [2024-11-27 05:50:01.620207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.798 [2024-11-27 05:50:01.620215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.798 [2024-11-27 05:50:01.620224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.798 [2024-11-27 05:50:01.620230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.798 [2024-11-27 05:50:01.632067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.798 [2024-11-27 05:50:01.632486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.798 [2024-11-27 05:50:01.632501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.798 [2024-11-27 05:50:01.632507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.798 [2024-11-27 05:50:01.632665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.798 [2024-11-27 05:50:01.632854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.798 [2024-11-27 05:50:01.632863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.798 [2024-11-27 05:50:01.632870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.798 [2024-11-27 05:50:01.632875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.798 [2024-11-27 05:50:01.644890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.798 [2024-11-27 05:50:01.645304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.798 [2024-11-27 05:50:01.645351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.798 [2024-11-27 05:50:01.645374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.798 [2024-11-27 05:50:01.645897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.798 [2024-11-27 05:50:01.646071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.646080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.646086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.646092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.657656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.657970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.657987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.657993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.658151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.658310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.658317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.658323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.658329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.670461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.670795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.670841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.670864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.671446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.671660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.671668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.671681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.671687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.683241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.683678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.683695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.683716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.683884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.684051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.684060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.684067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.684074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.696285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.696711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.696728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.696735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.696903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.697072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.697080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.697086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.697091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.709200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.709632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.709648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.709658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.709852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.710025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.710032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.710039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.710045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.721972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.722385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.722401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.722408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.722567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.722749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.722757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.722763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.722769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.734718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.735137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.735153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.735160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.735319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.735478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.735485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.735491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.735496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.747547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.747992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.748009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.748016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.748184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.748354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.748362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.748368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.748374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.760421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.760852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.760896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.760919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.761501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.762046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.762054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.762060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.762066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.773215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.773645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.773661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.773673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.773841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.774008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.774016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.774022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.774028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.785979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.786357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.786372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.786379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:13.799 [2024-11-27 05:50:01.786547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:13.799 [2024-11-27 05:50:01.786719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:13.799 [2024-11-27 05:50:01.786728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:13.799 [2024-11-27 05:50:01.786738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:13.799 [2024-11-27 05:50:01.786744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:13.799 [2024-11-27 05:50:01.798976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:13.799 [2024-11-27 05:50:01.799332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.799 [2024-11-27 05:50:01.799349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:13.799 [2024-11-27 05:50:01.799356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.059 [2024-11-27 05:50:01.799529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.059 [2024-11-27 05:50:01.799709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.059 [2024-11-27 05:50:01.799721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.059 [2024-11-27 05:50:01.799728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.059 [2024-11-27 05:50:01.799734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.059 [2024-11-27 05:50:01.812100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.059 [2024-11-27 05:50:01.812446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.059 [2024-11-27 05:50:01.812463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.059 [2024-11-27 05:50:01.812470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.059 [2024-11-27 05:50:01.812643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.059 [2024-11-27 05:50:01.812825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.059 [2024-11-27 05:50:01.812837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.059 [2024-11-27 05:50:01.812844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.059 [2024-11-27 05:50:01.812852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.059 [2024-11-27 05:50:01.825094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.059 [2024-11-27 05:50:01.825475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.059 [2024-11-27 05:50:01.825491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.059 [2024-11-27 05:50:01.825498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.059 [2024-11-27 05:50:01.825679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.059 [2024-11-27 05:50:01.825852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.059 [2024-11-27 05:50:01.825860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.059 [2024-11-27 05:50:01.825867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.059 [2024-11-27 05:50:01.825873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.060 [2024-11-27 05:50:01.837984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.060 [2024-11-27 05:50:01.838390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.060 [2024-11-27 05:50:01.838432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.060 [2024-11-27 05:50:01.838454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.060 [2024-11-27 05:50:01.838964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.060 [2024-11-27 05:50:01.839138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.060 [2024-11-27 05:50:01.839146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.060 [2024-11-27 05:50:01.839152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.060 [2024-11-27 05:50:01.839158] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.060 [2024-11-27 05:50:01.850935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.060 [2024-11-27 05:50:01.851335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.060 [2024-11-27 05:50:01.851351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.060 [2024-11-27 05:50:01.851358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.060 [2024-11-27 05:50:01.851526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.060 [2024-11-27 05:50:01.851716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.060 [2024-11-27 05:50:01.851725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.060 [2024-11-27 05:50:01.851731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.060 [2024-11-27 05:50:01.851737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.060 [2024-11-27 05:50:01.863758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.060 [2024-11-27 05:50:01.864182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.060 [2024-11-27 05:50:01.864198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.060 [2024-11-27 05:50:01.864205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.060 [2024-11-27 05:50:01.864373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.060 [2024-11-27 05:50:01.864540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.060 [2024-11-27 05:50:01.864548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.060 [2024-11-27 05:50:01.864554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.060 [2024-11-27 05:50:01.864560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.060 [2024-11-27 05:50:01.876603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.060 [2024-11-27 05:50:01.877039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.060 [2024-11-27 05:50:01.877055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.060 [2024-11-27 05:50:01.877065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.060 [2024-11-27 05:50:01.877233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.060 [2024-11-27 05:50:01.877404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.060 [2024-11-27 05:50:01.877412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.060 [2024-11-27 05:50:01.877418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.060 [2024-11-27 05:50:01.877424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.060 [2024-11-27 05:50:01.889412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.060 [2024-11-27 05:50:01.889811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.060 [2024-11-27 05:50:01.889828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.060 [2024-11-27 05:50:01.889835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.060 [2024-11-27 05:50:01.890013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.060 [2024-11-27 05:50:01.890172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.060 [2024-11-27 05:50:01.890179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.060 [2024-11-27 05:50:01.890185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.060 [2024-11-27 05:50:01.890191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.060 [2024-11-27 05:50:01.902178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.060 [2024-11-27 05:50:01.902592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.060 [2024-11-27 05:50:01.902609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.060 [2024-11-27 05:50:01.902616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.060 [2024-11-27 05:50:01.902790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.060 [2024-11-27 05:50:01.902958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.060 [2024-11-27 05:50:01.902965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.060 [2024-11-27 05:50:01.902972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.060 [2024-11-27 05:50:01.902978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.060 [2024-11-27 05:50:01.914941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.060 [2024-11-27 05:50:01.915331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.060 [2024-11-27 05:50:01.915347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.060 [2024-11-27 05:50:01.915353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.060 [2024-11-27 05:50:01.915512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.060 [2024-11-27 05:50:01.915681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.060 [2024-11-27 05:50:01.915689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.060 [2024-11-27 05:50:01.915694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.060 [2024-11-27 05:50:01.915717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.060 [2024-11-27 05:50:01.927782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.060 [2024-11-27 05:50:01.928200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.060 [2024-11-27 05:50:01.928217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.060 [2024-11-27 05:50:01.928224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.060 [2024-11-27 05:50:01.928391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.060 [2024-11-27 05:50:01.928559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.060 [2024-11-27 05:50:01.928567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.060 [2024-11-27 05:50:01.928573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.060 [2024-11-27 05:50:01.928579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.060 [2024-11-27 05:50:01.940518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.060 [2024-11-27 05:50:01.940981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.060 [2024-11-27 05:50:01.941027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.060 [2024-11-27 05:50:01.941049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.060 [2024-11-27 05:50:01.941512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.060 [2024-11-27 05:50:01.941686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.060 [2024-11-27 05:50:01.941695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.060 [2024-11-27 05:50:01.941701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.060 [2024-11-27 05:50:01.941708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.060 [2024-11-27 05:50:01.953692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.060 [2024-11-27 05:50:01.954048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.060 [2024-11-27 05:50:01.954066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.060 [2024-11-27 05:50:01.954073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.060 [2024-11-27 05:50:01.954246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.060 [2024-11-27 05:50:01.954419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.060 [2024-11-27 05:50:01.954427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.060 [2024-11-27 05:50:01.954437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.061 [2024-11-27 05:50:01.954444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.061 [2024-11-27 05:50:01.966655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.061 [2024-11-27 05:50:01.967114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.061 [2024-11-27 05:50:01.967159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.061 [2024-11-27 05:50:01.967182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.061 [2024-11-27 05:50:01.967615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.061 [2024-11-27 05:50:01.967790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.061 [2024-11-27 05:50:01.967798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.061 [2024-11-27 05:50:01.967804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.061 [2024-11-27 05:50:01.967810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.061 [2024-11-27 05:50:01.979487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.061 [2024-11-27 05:50:01.979911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.061 [2024-11-27 05:50:01.979928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.061 [2024-11-27 05:50:01.979935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.061 [2024-11-27 05:50:01.980103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.061 [2024-11-27 05:50:01.980272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.061 [2024-11-27 05:50:01.980279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.061 [2024-11-27 05:50:01.980285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.061 [2024-11-27 05:50:01.980291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.061 [2024-11-27 05:50:01.992297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.061 [2024-11-27 05:50:01.992713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.061 [2024-11-27 05:50:01.992729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.061 [2024-11-27 05:50:01.992736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.061 [2024-11-27 05:50:01.992904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.061 [2024-11-27 05:50:01.993071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.061 [2024-11-27 05:50:01.993079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.061 [2024-11-27 05:50:01.993085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.061 [2024-11-27 05:50:01.993091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.061 [2024-11-27 05:50:02.005104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.061 [2024-11-27 05:50:02.005555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.061 [2024-11-27 05:50:02.005600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.061 [2024-11-27 05:50:02.005622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.061 [2024-11-27 05:50:02.006218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.061 [2024-11-27 05:50:02.006721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.061 [2024-11-27 05:50:02.006729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.061 [2024-11-27 05:50:02.006735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.061 [2024-11-27 05:50:02.006742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.061 [2024-11-27 05:50:02.018003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.061 [2024-11-27 05:50:02.018423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.061 [2024-11-27 05:50:02.018440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.061 [2024-11-27 05:50:02.018447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.061 [2024-11-27 05:50:02.018615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.061 [2024-11-27 05:50:02.018789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.061 [2024-11-27 05:50:02.018797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.061 [2024-11-27 05:50:02.018803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.061 [2024-11-27 05:50:02.018809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.061 [2024-11-27 05:50:02.030854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.061 [2024-11-27 05:50:02.031253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.061 [2024-11-27 05:50:02.031298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.061 [2024-11-27 05:50:02.031321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.061 [2024-11-27 05:50:02.031813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.061 [2024-11-27 05:50:02.031982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.061 [2024-11-27 05:50:02.031989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.061 [2024-11-27 05:50:02.031995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.061 [2024-11-27 05:50:02.032001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.061 [2024-11-27 05:50:02.043688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.061 [2024-11-27 05:50:02.044084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.061 [2024-11-27 05:50:02.044100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.061 [2024-11-27 05:50:02.044109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.061 [2024-11-27 05:50:02.044269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.061 [2024-11-27 05:50:02.044428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.061 [2024-11-27 05:50:02.044435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.061 [2024-11-27 05:50:02.044441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.061 [2024-11-27 05:50:02.044446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.061 [2024-11-27 05:50:02.056542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.061 [2024-11-27 05:50:02.056985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.061 [2024-11-27 05:50:02.057002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.061 [2024-11-27 05:50:02.057009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.061 [2024-11-27 05:50:02.057176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.061 [2024-11-27 05:50:02.057344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.061 [2024-11-27 05:50:02.057351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.061 [2024-11-27 05:50:02.057357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.061 [2024-11-27 05:50:02.057363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.322 5791.60 IOPS, 22.62 MiB/s [2024-11-27T04:50:02.326Z] [2024-11-27 05:50:02.069423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.322 [2024-11-27 05:50:02.069811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.323 [2024-11-27 05:50:02.069827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.323 [2024-11-27 05:50:02.069834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.323 [2024-11-27 05:50:02.069994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.323 [2024-11-27 05:50:02.070153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.323 [2024-11-27 05:50:02.070160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.323 [2024-11-27 05:50:02.070166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.323 [2024-11-27 05:50:02.070172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.323 [2024-11-27 05:50:02.082169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.323 [2024-11-27 05:50:02.082580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.323 [2024-11-27 05:50:02.082596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.323 [2024-11-27 05:50:02.082603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.323 [2024-11-27 05:50:02.082788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.323 [2024-11-27 05:50:02.082963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.323 [2024-11-27 05:50:02.082971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.323 [2024-11-27 05:50:02.082977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.323 [2024-11-27 05:50:02.082983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.323 [2024-11-27 05:50:02.094976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.323 [2024-11-27 05:50:02.095371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.323 [2024-11-27 05:50:02.095414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.323 [2024-11-27 05:50:02.095437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.323 [2024-11-27 05:50:02.095885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.323 [2024-11-27 05:50:02.096054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.323 [2024-11-27 05:50:02.096062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.323 [2024-11-27 05:50:02.096068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.323 [2024-11-27 05:50:02.096074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.323 [2024-11-27 05:50:02.107806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.323 [2024-11-27 05:50:02.108207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.323 [2024-11-27 05:50:02.108223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.323 [2024-11-27 05:50:02.108230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.323 [2024-11-27 05:50:02.108397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.323 [2024-11-27 05:50:02.108565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.323 [2024-11-27 05:50:02.108573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.323 [2024-11-27 05:50:02.108579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.323 [2024-11-27 05:50:02.108585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.323 [2024-11-27 05:50:02.120606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.323 [2024-11-27 05:50:02.121024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.323 [2024-11-27 05:50:02.121041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.323 [2024-11-27 05:50:02.121048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.323 [2024-11-27 05:50:02.121216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.323 [2024-11-27 05:50:02.121384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.323 [2024-11-27 05:50:02.121392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.323 [2024-11-27 05:50:02.121402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.323 [2024-11-27 05:50:02.121408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.323 [2024-11-27 05:50:02.133440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.323 [2024-11-27 05:50:02.133857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.323 [2024-11-27 05:50:02.133873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.323 [2024-11-27 05:50:02.133880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.323 [2024-11-27 05:50:02.134038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.323 [2024-11-27 05:50:02.134197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.323 [2024-11-27 05:50:02.134204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.323 [2024-11-27 05:50:02.134210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.323 [2024-11-27 05:50:02.134216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.323 [2024-11-27 05:50:02.146215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.323 [2024-11-27 05:50:02.146624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.323 [2024-11-27 05:50:02.146668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.323 [2024-11-27 05:50:02.146705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.323 [2024-11-27 05:50:02.147287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.323 [2024-11-27 05:50:02.147881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.323 [2024-11-27 05:50:02.147907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.323 [2024-11-27 05:50:02.147927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.323 [2024-11-27 05:50:02.147946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.323 [2024-11-27 05:50:02.158994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.323 [2024-11-27 05:50:02.159408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.323 [2024-11-27 05:50:02.159424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.323 [2024-11-27 05:50:02.159431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.323 [2024-11-27 05:50:02.159598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.323 [2024-11-27 05:50:02.159772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.323 [2024-11-27 05:50:02.159780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.323 [2024-11-27 05:50:02.159786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.323 [2024-11-27 05:50:02.159793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.323 [2024-11-27 05:50:02.171829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.323 [2024-11-27 05:50:02.172232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.323 [2024-11-27 05:50:02.172249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.323 [2024-11-27 05:50:02.172255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.323 [2024-11-27 05:50:02.172414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.323 [2024-11-27 05:50:02.172572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.324 [2024-11-27 05:50:02.172580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.324 [2024-11-27 05:50:02.172586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.324 [2024-11-27 05:50:02.172592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.324 [2024-11-27 05:50:02.184582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:14.324 [2024-11-27 05:50:02.184931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.324 [2024-11-27 05:50:02.184948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420
00:28:14.324 [2024-11-27 05:50:02.184955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.324 [2024-11-27 05:50:02.185122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor
00:28:14.324 [2024-11-27 05:50:02.185289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:14.324 [2024-11-27 05:50:02.185297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:14.324 [2024-11-27 05:50:02.185303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:14.324 [2024-11-27 05:50:02.185309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:14.324 [2024-11-27 05:50:02.197405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.324 [2024-11-27 05:50:02.197831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-11-27 05:50:02.197847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.324 [2024-11-27 05:50:02.197855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.324 [2024-11-27 05:50:02.198022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.324 [2024-11-27 05:50:02.198189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.324 [2024-11-27 05:50:02.198197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.324 [2024-11-27 05:50:02.198204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.324 [2024-11-27 05:50:02.198210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.324 [2024-11-27 05:50:02.210376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.324 [2024-11-27 05:50:02.210823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-11-27 05:50:02.210840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.324 [2024-11-27 05:50:02.210850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.324 [2024-11-27 05:50:02.211024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.324 [2024-11-27 05:50:02.211198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.324 [2024-11-27 05:50:02.211206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.324 [2024-11-27 05:50:02.211213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.324 [2024-11-27 05:50:02.211219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.324 [2024-11-27 05:50:02.223148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.324 [2024-11-27 05:50:02.223544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-11-27 05:50:02.223560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.324 [2024-11-27 05:50:02.223566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.324 [2024-11-27 05:50:02.223748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.324 [2024-11-27 05:50:02.223917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.324 [2024-11-27 05:50:02.223925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.324 [2024-11-27 05:50:02.223931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.324 [2024-11-27 05:50:02.223937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.324 [2024-11-27 05:50:02.235943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.324 [2024-11-27 05:50:02.236373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-11-27 05:50:02.236389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.324 [2024-11-27 05:50:02.236396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.324 [2024-11-27 05:50:02.236563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.324 [2024-11-27 05:50:02.236737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.324 [2024-11-27 05:50:02.236746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.324 [2024-11-27 05:50:02.236752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.324 [2024-11-27 05:50:02.236758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.324 [2024-11-27 05:50:02.248794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.324 [2024-11-27 05:50:02.249218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-11-27 05:50:02.249261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.324 [2024-11-27 05:50:02.249283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.324 [2024-11-27 05:50:02.249881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.324 [2024-11-27 05:50:02.250247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.324 [2024-11-27 05:50:02.250255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.324 [2024-11-27 05:50:02.250261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.324 [2024-11-27 05:50:02.250267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.324 [2024-11-27 05:50:02.261593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.324 [2024-11-27 05:50:02.262006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-11-27 05:50:02.262023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.324 [2024-11-27 05:50:02.262030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.324 [2024-11-27 05:50:02.262197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.324 [2024-11-27 05:50:02.262365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.324 [2024-11-27 05:50:02.262372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.324 [2024-11-27 05:50:02.262378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.324 [2024-11-27 05:50:02.262385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.324 [2024-11-27 05:50:02.274314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.324 [2024-11-27 05:50:02.274728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-11-27 05:50:02.274745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.324 [2024-11-27 05:50:02.274751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.324 [2024-11-27 05:50:02.274919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.324 [2024-11-27 05:50:02.275086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.324 [2024-11-27 05:50:02.275093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.324 [2024-11-27 05:50:02.275100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.324 [2024-11-27 05:50:02.275106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.324 [2024-11-27 05:50:02.287033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.324 [2024-11-27 05:50:02.287437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.324 [2024-11-27 05:50:02.287453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.324 [2024-11-27 05:50:02.287459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.324 [2024-11-27 05:50:02.287618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.324 [2024-11-27 05:50:02.287804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.324 [2024-11-27 05:50:02.287813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.324 [2024-11-27 05:50:02.287822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.325 [2024-11-27 05:50:02.287829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.325 [2024-11-27 05:50:02.299797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.325 [2024-11-27 05:50:02.300213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-11-27 05:50:02.300230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.325 [2024-11-27 05:50:02.300236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.325 [2024-11-27 05:50:02.300404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.325 [2024-11-27 05:50:02.300572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.325 [2024-11-27 05:50:02.300580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.325 [2024-11-27 05:50:02.300586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.325 [2024-11-27 05:50:02.300592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.325 [2024-11-27 05:50:02.312590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.325 [2024-11-27 05:50:02.313011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.325 [2024-11-27 05:50:02.313056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.325 [2024-11-27 05:50:02.313078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.325 [2024-11-27 05:50:02.313587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.325 [2024-11-27 05:50:02.313863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.325 [2024-11-27 05:50:02.313880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.325 [2024-11-27 05:50:02.313895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.325 [2024-11-27 05:50:02.313908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.585 [2024-11-27 05:50:02.327472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.585 [2024-11-27 05:50:02.327991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.585 [2024-11-27 05:50:02.328038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.585 [2024-11-27 05:50:02.328062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.585 [2024-11-27 05:50:02.328609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.585 [2024-11-27 05:50:02.328871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.585 [2024-11-27 05:50:02.328883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.585 [2024-11-27 05:50:02.328893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.585 [2024-11-27 05:50:02.328902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.585 [2024-11-27 05:50:02.340492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.585 [2024-11-27 05:50:02.340896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.585 [2024-11-27 05:50:02.340913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.585 [2024-11-27 05:50:02.340919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.585 [2024-11-27 05:50:02.341087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.585 [2024-11-27 05:50:02.341254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.585 [2024-11-27 05:50:02.341262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.585 [2024-11-27 05:50:02.341268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.585 [2024-11-27 05:50:02.341274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.585 [2024-11-27 05:50:02.353474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.585 [2024-11-27 05:50:02.353889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.585 [2024-11-27 05:50:02.353906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.585 [2024-11-27 05:50:02.353913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.585 [2024-11-27 05:50:02.354086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.585 [2024-11-27 05:50:02.354259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.585 [2024-11-27 05:50:02.354267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.586 [2024-11-27 05:50:02.354273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.586 [2024-11-27 05:50:02.354279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.586 [2024-11-27 05:50:02.366481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.586 [2024-11-27 05:50:02.366884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-11-27 05:50:02.366901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.586 [2024-11-27 05:50:02.366908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.586 [2024-11-27 05:50:02.367075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.586 [2024-11-27 05:50:02.367242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.586 [2024-11-27 05:50:02.367249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.586 [2024-11-27 05:50:02.367256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.586 [2024-11-27 05:50:02.367262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.586 [2024-11-27 05:50:02.379339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.586 [2024-11-27 05:50:02.379742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-11-27 05:50:02.379788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.586 [2024-11-27 05:50:02.379818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.586 [2024-11-27 05:50:02.380289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.586 [2024-11-27 05:50:02.380449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.586 [2024-11-27 05:50:02.380456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.586 [2024-11-27 05:50:02.380462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.586 [2024-11-27 05:50:02.380468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.586 [2024-11-27 05:50:02.392391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.586 [2024-11-27 05:50:02.392728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-11-27 05:50:02.392745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.586 [2024-11-27 05:50:02.392753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.586 [2024-11-27 05:50:02.392926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.586 [2024-11-27 05:50:02.393099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.586 [2024-11-27 05:50:02.393106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.586 [2024-11-27 05:50:02.393113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.586 [2024-11-27 05:50:02.393119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.586 [2024-11-27 05:50:02.405472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.586 [2024-11-27 05:50:02.405810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-11-27 05:50:02.405828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.586 [2024-11-27 05:50:02.405835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.586 [2024-11-27 05:50:02.406008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.586 [2024-11-27 05:50:02.406181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.586 [2024-11-27 05:50:02.406189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.586 [2024-11-27 05:50:02.406196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.586 [2024-11-27 05:50:02.406202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.586 [2024-11-27 05:50:02.418561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.586 [2024-11-27 05:50:02.418970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-11-27 05:50:02.418988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.586 [2024-11-27 05:50:02.418995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.586 [2024-11-27 05:50:02.419167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.586 [2024-11-27 05:50:02.419342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.586 [2024-11-27 05:50:02.419350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.586 [2024-11-27 05:50:02.419357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.586 [2024-11-27 05:50:02.419363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.586 [2024-11-27 05:50:02.431558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.586 [2024-11-27 05:50:02.431932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-11-27 05:50:02.431949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.586 [2024-11-27 05:50:02.431957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.586 [2024-11-27 05:50:02.432130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.586 [2024-11-27 05:50:02.432302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.586 [2024-11-27 05:50:02.432310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.586 [2024-11-27 05:50:02.432317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.586 [2024-11-27 05:50:02.432323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.586 [2024-11-27 05:50:02.444635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.586 [2024-11-27 05:50:02.445051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-11-27 05:50:02.445067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.586 [2024-11-27 05:50:02.445075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.586 [2024-11-27 05:50:02.445269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.586 [2024-11-27 05:50:02.445452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.586 [2024-11-27 05:50:02.445461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.586 [2024-11-27 05:50:02.445468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.586 [2024-11-27 05:50:02.445474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.586 [2024-11-27 05:50:02.457750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.586 [2024-11-27 05:50:02.458171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-11-27 05:50:02.458187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.586 [2024-11-27 05:50:02.458194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.586 [2024-11-27 05:50:02.458367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.587 [2024-11-27 05:50:02.458539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.587 [2024-11-27 05:50:02.458547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.587 [2024-11-27 05:50:02.458557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.587 [2024-11-27 05:50:02.458563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.587 [2024-11-27 05:50:02.470851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.587 [2024-11-27 05:50:02.471235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-11-27 05:50:02.471252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.587 [2024-11-27 05:50:02.471259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.587 [2024-11-27 05:50:02.471431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.587 [2024-11-27 05:50:02.471603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.587 [2024-11-27 05:50:02.471611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.587 [2024-11-27 05:50:02.471617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.587 [2024-11-27 05:50:02.471624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.587 [2024-11-27 05:50:02.483872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.587 [2024-11-27 05:50:02.484232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-11-27 05:50:02.484276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.587 [2024-11-27 05:50:02.484299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.587 [2024-11-27 05:50:02.484780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.587 [2024-11-27 05:50:02.484955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.587 [2024-11-27 05:50:02.484963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.587 [2024-11-27 05:50:02.484969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.587 [2024-11-27 05:50:02.484975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.587 [2024-11-27 05:50:02.496700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.587 [2024-11-27 05:50:02.496978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-11-27 05:50:02.496994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.587 [2024-11-27 05:50:02.497001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.587 [2024-11-27 05:50:02.497169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.587 [2024-11-27 05:50:02.497337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.587 [2024-11-27 05:50:02.497344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.587 [2024-11-27 05:50:02.497351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.587 [2024-11-27 05:50:02.497357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.587 [2024-11-27 05:50:02.509642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.587 [2024-11-27 05:50:02.510004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-11-27 05:50:02.510021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.587 [2024-11-27 05:50:02.510028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.587 [2024-11-27 05:50:02.510196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.587 [2024-11-27 05:50:02.510363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.587 [2024-11-27 05:50:02.510371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.587 [2024-11-27 05:50:02.510377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.587 [2024-11-27 05:50:02.510383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.587 [2024-11-27 05:50:02.522523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.587 [2024-11-27 05:50:02.522891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-11-27 05:50:02.522908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.587 [2024-11-27 05:50:02.522915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.587 [2024-11-27 05:50:02.523083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.587 [2024-11-27 05:50:02.523251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.587 [2024-11-27 05:50:02.523259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.587 [2024-11-27 05:50:02.523265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.587 [2024-11-27 05:50:02.523271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.587 [2024-11-27 05:50:02.535428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.587 [2024-11-27 05:50:02.535880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-11-27 05:50:02.535925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.587 [2024-11-27 05:50:02.535948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.587 [2024-11-27 05:50:02.536529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.587 [2024-11-27 05:50:02.536948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.587 [2024-11-27 05:50:02.536957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.587 [2024-11-27 05:50:02.536963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.587 [2024-11-27 05:50:02.536970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.587 [2024-11-27 05:50:02.548404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.587 [2024-11-27 05:50:02.548768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-11-27 05:50:02.548786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.587 [2024-11-27 05:50:02.548796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.587 [2024-11-27 05:50:02.548965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.587 [2024-11-27 05:50:02.549132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.587 [2024-11-27 05:50:02.549140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.587 [2024-11-27 05:50:02.549146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.587 [2024-11-27 05:50:02.549152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.587 [2024-11-27 05:50:02.561263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.588 [2024-11-27 05:50:02.561666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-11-27 05:50:02.561687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.588 [2024-11-27 05:50:02.561694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.588 [2024-11-27 05:50:02.561862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.588 [2024-11-27 05:50:02.562028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.588 [2024-11-27 05:50:02.562037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.588 [2024-11-27 05:50:02.562043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.588 [2024-11-27 05:50:02.562049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.588 [2024-11-27 05:50:02.574338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.588 [2024-11-27 05:50:02.574695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-11-27 05:50:02.574740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.588 [2024-11-27 05:50:02.574764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.588 [2024-11-27 05:50:02.575347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.588 [2024-11-27 05:50:02.575941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.588 [2024-11-27 05:50:02.575967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.588 [2024-11-27 05:50:02.575994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.588 [2024-11-27 05:50:02.576008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.848 [2024-11-27 05:50:02.589429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.848 [2024-11-27 05:50:02.589868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.848 [2024-11-27 05:50:02.589891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.848 [2024-11-27 05:50:02.589901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.848 [2024-11-27 05:50:02.590155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.848 [2024-11-27 05:50:02.590416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.848 [2024-11-27 05:50:02.590427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.848 [2024-11-27 05:50:02.590436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.848 [2024-11-27 05:50:02.590445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.848 [2024-11-27 05:50:02.602408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.848 [2024-11-27 05:50:02.602744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.848 [2024-11-27 05:50:02.602761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.848 [2024-11-27 05:50:02.602768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.848 [2024-11-27 05:50:02.602941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.848 [2024-11-27 05:50:02.603118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.848 [2024-11-27 05:50:02.603126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.848 [2024-11-27 05:50:02.603132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.848 [2024-11-27 05:50:02.603138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.848 [2024-11-27 05:50:02.615237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.848 [2024-11-27 05:50:02.615579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.848 [2024-11-27 05:50:02.615596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.848 [2024-11-27 05:50:02.615603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.848 [2024-11-27 05:50:02.615773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.848 [2024-11-27 05:50:02.615942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.848 [2024-11-27 05:50:02.615950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.848 [2024-11-27 05:50:02.615956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.848 [2024-11-27 05:50:02.615962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1916609 Killed "${NVMF_APP[@]}" "$@" 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1917902 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1917902 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1917902 ']' 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.848 [2024-11-27 05:50:02.628263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.848 [2024-11-27 05:50:02.628613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.848 [2024-11-27 05:50:02.628631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.848 [2024-11-27 05:50:02.628639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set
00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.848 [2024-11-27 05:50:02.628816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.848 [2024-11-27 05:50:02.628990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.848 [2024-11-27 05:50:02.628998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.848 [2024-11-27 05:50:02.629005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.848 [2024-11-27 05:50:02.629011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.848 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:14.848 [2024-11-27 05:50:02.641384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.848 [2024-11-27 05:50:02.641762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.848 [2024-11-27 05:50:02.641778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.848 [2024-11-27 05:50:02.641785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.848 [2024-11-27 05:50:02.641958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.848 [2024-11-27 05:50:02.642131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.848 [2024-11-27 05:50:02.642139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.848 [2024-11-27 05:50:02.642145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.848 [2024-11-27 05:50:02.642152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.848 [2024-11-27 05:50:02.654412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.848 [2024-11-27 05:50:02.654841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.848 [2024-11-27 05:50:02.654859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.848 [2024-11-27 05:50:02.654866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.848 [2024-11-27 05:50:02.655039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.848 [2024-11-27 05:50:02.655213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.848 [2024-11-27 05:50:02.655225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.849 [2024-11-27 05:50:02.655231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.849 [2024-11-27 05:50:02.655237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.849 [2024-11-27 05:50:02.667392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.849 [2024-11-27 05:50:02.667735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.849 [2024-11-27 05:50:02.667752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.849 [2024-11-27 05:50:02.667760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.849 [2024-11-27 05:50:02.667933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.849 [2024-11-27 05:50:02.668105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.849 [2024-11-27 05:50:02.668113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.849 [2024-11-27 05:50:02.668119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.849 [2024-11-27 05:50:02.668125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:14.849 [2024-11-27 05:50:02.672720] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:28:14.849 [2024-11-27 05:50:02.672759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.849 [2024-11-27 05:50:02.680492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.849 [2024-11-27 05:50:02.680784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.849 [2024-11-27 05:50:02.680800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.849 [2024-11-27 05:50:02.680807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.849 [2024-11-27 05:50:02.680976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.849 [2024-11-27 05:50:02.681144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.849 [2024-11-27 05:50:02.681151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.849 [2024-11-27 05:50:02.681158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.849 [2024-11-27 05:50:02.681164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.849 [2024-11-27 05:50:02.693522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.849 [2024-11-27 05:50:02.693899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.849 [2024-11-27 05:50:02.693915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.849 [2024-11-27 05:50:02.693923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.849 [2024-11-27 05:50:02.694091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.849 [2024-11-27 05:50:02.694264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.849 [2024-11-27 05:50:02.694272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.849 [2024-11-27 05:50:02.694278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.849 [2024-11-27 05:50:02.694284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.849 [2024-11-27 05:50:02.706658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.849 [2024-11-27 05:50:02.707008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.849 [2024-11-27 05:50:02.707025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.849 [2024-11-27 05:50:02.707032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.849 [2024-11-27 05:50:02.707206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.849 [2024-11-27 05:50:02.707379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.849 [2024-11-27 05:50:02.707387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.849 [2024-11-27 05:50:02.707393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.849 [2024-11-27 05:50:02.707400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.849 [2024-11-27 05:50:02.719687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.849 [2024-11-27 05:50:02.720045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.849 [2024-11-27 05:50:02.720062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.849 [2024-11-27 05:50:02.720069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.849 [2024-11-27 05:50:02.720241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.849 [2024-11-27 05:50:02.720413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.849 [2024-11-27 05:50:02.720420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.849 [2024-11-27 05:50:02.720426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.849 [2024-11-27 05:50:02.720432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.849 [2024-11-27 05:50:02.732811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.849 [2024-11-27 05:50:02.733148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.849 [2024-11-27 05:50:02.733165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.849 [2024-11-27 05:50:02.733173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.849 [2024-11-27 05:50:02.733346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.849 [2024-11-27 05:50:02.733518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.849 [2024-11-27 05:50:02.733525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.849 [2024-11-27 05:50:02.733532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.849 [2024-11-27 05:50:02.733547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.849 [2024-11-27 05:50:02.745855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.849 [2024-11-27 05:50:02.746148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.849 [2024-11-27 05:50:02.746165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.849 [2024-11-27 05:50:02.746172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.849 [2024-11-27 05:50:02.746345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.849 [2024-11-27 05:50:02.746519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.849 [2024-11-27 05:50:02.746527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.849 [2024-11-27 05:50:02.746534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.849 [2024-11-27 05:50:02.746540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.849 [2024-11-27 05:50:02.752100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:14.849 [2024-11-27 05:50:02.758866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.849 [2024-11-27 05:50:02.759195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.849 [2024-11-27 05:50:02.759212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.849 [2024-11-27 05:50:02.759220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.849 [2024-11-27 05:50:02.759387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.849 [2024-11-27 05:50:02.759555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.849 [2024-11-27 05:50:02.759563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.849 [2024-11-27 05:50:02.759572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.850 [2024-11-27 05:50:02.759580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.850 [2024-11-27 05:50:02.771864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.850 [2024-11-27 05:50:02.772162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.850 [2024-11-27 05:50:02.772179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.850 [2024-11-27 05:50:02.772186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.850 [2024-11-27 05:50:02.772358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.850 [2024-11-27 05:50:02.772531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.850 [2024-11-27 05:50:02.772540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.850 [2024-11-27 05:50:02.772546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.850 [2024-11-27 05:50:02.772553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.850 [2024-11-27 05:50:02.784758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.850 [2024-11-27 05:50:02.785168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.850 [2024-11-27 05:50:02.785185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.850 [2024-11-27 05:50:02.785193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.850 [2024-11-27 05:50:02.785361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.850 [2024-11-27 05:50:02.785528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.850 [2024-11-27 05:50:02.785536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.850 [2024-11-27 05:50:02.785543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.850 [2024-11-27 05:50:02.785549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:14.850 [2024-11-27 05:50:02.792541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.850 [2024-11-27 05:50:02.792566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.850 [2024-11-27 05:50:02.792573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.850 [2024-11-27 05:50:02.792580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:14.850 [2024-11-27 05:50:02.792587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.850 [2024-11-27 05:50:02.793972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.850 [2024-11-27 05:50:02.794081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.850 [2024-11-27 05:50:02.794082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.850 [2024-11-27 05:50:02.797849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.850 [2024-11-27 05:50:02.798270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.850 [2024-11-27 05:50:02.798289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.850 [2024-11-27 05:50:02.798299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.850 [2024-11-27 05:50:02.798473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.850 [2024-11-27 05:50:02.798648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.850 [2024-11-27 05:50:02.798656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.850 [2024-11-27 05:50:02.798664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.850 [2024-11-27 05:50:02.798677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.850 [2024-11-27 05:50:02.810894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.850 [2024-11-27 05:50:02.811249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.850 [2024-11-27 05:50:02.811269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.850 [2024-11-27 05:50:02.811276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.850 [2024-11-27 05:50:02.811449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.850 [2024-11-27 05:50:02.811628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.850 [2024-11-27 05:50:02.811636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.850 [2024-11-27 05:50:02.811644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.850 [2024-11-27 05:50:02.811651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.850 [2024-11-27 05:50:02.823883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.850 [2024-11-27 05:50:02.824197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.850 [2024-11-27 05:50:02.824218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.850 [2024-11-27 05:50:02.824228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.850 [2024-11-27 05:50:02.824401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.850 [2024-11-27 05:50:02.824576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.850 [2024-11-27 05:50:02.824584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.850 [2024-11-27 05:50:02.824591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.850 [2024-11-27 05:50:02.824598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:14.850 [2024-11-27 05:50:02.837018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:14.850 [2024-11-27 05:50:02.837492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.850 [2024-11-27 05:50:02.837511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:14.850 [2024-11-27 05:50:02.837520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:14.850 [2024-11-27 05:50:02.837701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:14.850 [2024-11-27 05:50:02.837875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:14.850 [2024-11-27 05:50:02.837883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:14.850 [2024-11-27 05:50:02.837890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:14.850 [2024-11-27 05:50:02.837897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.110 [2024-11-27 05:50:02.850121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.110 [2024-11-27 05:50:02.850588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.110 [2024-11-27 05:50:02.850609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.110 [2024-11-27 05:50:02.850617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.110 [2024-11-27 05:50:02.850798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.110 [2024-11-27 05:50:02.850972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.110 [2024-11-27 05:50:02.850981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.110 [2024-11-27 05:50:02.850996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.110 [2024-11-27 05:50:02.851004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.110 [2024-11-27 05:50:02.863233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.110 [2024-11-27 05:50:02.863675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.110 [2024-11-27 05:50:02.863694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.110 [2024-11-27 05:50:02.863702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.110 [2024-11-27 05:50:02.863875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.110 [2024-11-27 05:50:02.864050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.110 [2024-11-27 05:50:02.864058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.110 [2024-11-27 05:50:02.864065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.110 [2024-11-27 05:50:02.864072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.110 [2024-11-27 05:50:02.876270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.110 [2024-11-27 05:50:02.876704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.110 [2024-11-27 05:50:02.876722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.110 [2024-11-27 05:50:02.876729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.110 [2024-11-27 05:50:02.876902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.110 [2024-11-27 05:50:02.877076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.110 [2024-11-27 05:50:02.877085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.110 [2024-11-27 05:50:02.877092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.110 [2024-11-27 05:50:02.877098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.110 [2024-11-27 05:50:02.889303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.110 [2024-11-27 05:50:02.889730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.110 [2024-11-27 05:50:02.889747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.110 [2024-11-27 05:50:02.889755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.110 [2024-11-27 05:50:02.889928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.110 [2024-11-27 05:50:02.890101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.110 [2024-11-27 05:50:02.890109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.110 [2024-11-27 05:50:02.890116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.110 [2024-11-27 05:50:02.890123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.110 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.110 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:15.110 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:15.110 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:15.110 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:15.110 [2024-11-27 05:50:02.902320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.110 [2024-11-27 05:50:02.902691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.110 [2024-11-27 05:50:02.902708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.110 [2024-11-27 05:50:02.902715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.110 [2024-11-27 05:50:02.902888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.110 [2024-11-27 05:50:02.903061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.110 [2024-11-27 05:50:02.903069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.110 [2024-11-27 05:50:02.903075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.110 [2024-11-27 05:50:02.903081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.110 [2024-11-27 05:50:02.915341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.110 [2024-11-27 05:50:02.915682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.110 [2024-11-27 05:50:02.915700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.110 [2024-11-27 05:50:02.915707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.110 [2024-11-27 05:50:02.915879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.110 [2024-11-27 05:50:02.916052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.110 [2024-11-27 05:50:02.916061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.110 [2024-11-27 05:50:02.916067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.110 [2024-11-27 05:50:02.916073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.110 [2024-11-27 05:50:02.928451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.110 [2024-11-27 05:50:02.928817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.110 [2024-11-27 05:50:02.928835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.110 [2024-11-27 05:50:02.928842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.110 [2024-11-27 05:50:02.929015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.110 [2024-11-27 05:50:02.929187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.110 [2024-11-27 05:50:02.929194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.110 [2024-11-27 05:50:02.929201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.110 [2024-11-27 05:50:02.929207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.110 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.110 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:15.110 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:15.111 [2024-11-27 05:50:02.939236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.111 [2024-11-27 05:50:02.941421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.111 [2024-11-27 05:50:02.941693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.111 [2024-11-27 05:50:02.941710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.111 [2024-11-27 05:50:02.941717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.111 [2024-11-27 05:50:02.941891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.111 [2024-11-27 05:50:02.942064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.111 [2024-11-27 05:50:02.942072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.111 [2024-11-27 05:50:02.942078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.111 [2024-11-27 05:50:02.942085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:15.111 [2024-11-27 05:50:02.954655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.111 [2024-11-27 05:50:02.954934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.111 [2024-11-27 05:50:02.954952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.111 [2024-11-27 05:50:02.954959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.111 [2024-11-27 05:50:02.955134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.111 [2024-11-27 05:50:02.955311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.111 [2024-11-27 05:50:02.955319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.111 [2024-11-27 05:50:02.955326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.111 [2024-11-27 05:50:02.955332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.111 [2024-11-27 05:50:02.967712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.111 [2024-11-27 05:50:02.968151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.111 [2024-11-27 05:50:02.968169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.111 [2024-11-27 05:50:02.968176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.111 [2024-11-27 05:50:02.968353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.111 [2024-11-27 05:50:02.968526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.111 [2024-11-27 05:50:02.968534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.111 [2024-11-27 05:50:02.968540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.111 [2024-11-27 05:50:02.968546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.111 Malloc0 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:15.111 [2024-11-27 05:50:02.980748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.111 [2024-11-27 05:50:02.981158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.111 [2024-11-27 05:50:02.981175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.111 [2024-11-27 05:50:02.981182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.111 [2024-11-27 05:50:02.981355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.111 [2024-11-27 05:50:02.981527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.111 [2024-11-27 05:50:02.981534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.111 [2024-11-27 05:50:02.981541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.111 [2024-11-27 05:50:02.981547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:15.111 [2024-11-27 05:50:02.993761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.111 [2024-11-27 05:50:02.994202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.111 [2024-11-27 05:50:02.994219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d7510 with addr=10.0.0.2, port=4420 00:28:15.111 [2024-11-27 05:50:02.994226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d7510 is same with the state(6) to be set 00:28:15.111 [2024-11-27 05:50:02.994398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d7510 (9): Bad file descriptor 00:28:15.111 [2024-11-27 05:50:02.994572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:15.111 [2024-11-27 05:50:02.994580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:15.111 [2024-11-27 05:50:02.994587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:15.111 [2024-11-27 05:50:02.994593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.111 05:50:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:15.111 [2024-11-27 05:50:03.002041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.111 05:50:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.111 [2024-11-27 05:50:03.006798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:15.111 05:50:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1916868 00:28:15.111 4826.33 IOPS, 18.85 MiB/s [2024-11-27T04:50:03.115Z] [2024-11-27 05:50:03.068943] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:28:17.418 5747.57 IOPS, 22.45 MiB/s [2024-11-27T04:50:06.358Z] 6442.62 IOPS, 25.17 MiB/s [2024-11-27T04:50:07.296Z] 7002.89 IOPS, 27.36 MiB/s [2024-11-27T04:50:08.234Z] 7437.70 IOPS, 29.05 MiB/s [2024-11-27T04:50:09.170Z] 7789.00 IOPS, 30.43 MiB/s [2024-11-27T04:50:10.106Z] 8076.42 IOPS, 31.55 MiB/s [2024-11-27T04:50:11.486Z] 8323.92 IOPS, 32.52 MiB/s [2024-11-27T04:50:12.424Z] 8521.50 IOPS, 33.29 MiB/s [2024-11-27T04:50:12.424Z] 8716.60 IOPS, 34.05 MiB/s 00:28:24.420 Latency(us) 00:28:24.420 [2024-11-27T04:50:12.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:24.420 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:24.420 Verification LBA range: start 0x0 length 0x4000 00:28:24.420 Nvme1n1 : 15.01 8720.12 34.06 11035.19 0.00 6459.35 438.86 15666.22 00:28:24.420 [2024-11-27T04:50:12.424Z] =================================================================================================================== 00:28:24.420 [2024-11-27T04:50:12.424Z] Total : 8720.12 34.06 11035.19 0.00 6459.35 438.86 15666.22 00:28:24.420 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:24.420 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.420 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.420 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:24.420 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.420 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:24.421 rmmod nvme_tcp 00:28:24.421 rmmod nvme_fabrics 00:28:24.421 rmmod nvme_keyring 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1917902 ']' 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1917902 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1917902 ']' 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1917902 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1917902 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1917902' 00:28:24.421 killing process with pid 1917902 00:28:24.421 
05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1917902 00:28:24.421 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1917902 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.680 05:50:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:27.220 00:28:27.220 real 0m26.089s 00:28:27.220 user 1m0.901s 00:28:27.220 sys 0m6.719s 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.220 ************************************ 00:28:27.220 END TEST nvmf_bdevperf 00:28:27.220 
************************************ 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.220 ************************************ 00:28:27.220 START TEST nvmf_target_disconnect 00:28:27.220 ************************************ 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:27.220 * Looking for test storage... 00:28:27.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:27.220 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:27.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.221 --rc genhtml_branch_coverage=1 00:28:27.221 --rc genhtml_function_coverage=1 00:28:27.221 --rc genhtml_legend=1 00:28:27.221 --rc geninfo_all_blocks=1 00:28:27.221 --rc geninfo_unexecuted_blocks=1 
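The `cmp_versions` trace above splits both version strings on `.-:` into arrays, then compares them component by component up to the longer length, treating missing components as zero; `lt 1.15 2` therefore succeeds at the very first component. A minimal standalone sketch of that algorithm (the name `ver_lt` is mine, not the script's):

```shell
# Sketch of the component-wise version comparison stepped through in the log:
# succeeds (exit 0) when $1 is strictly lower than $2.
ver_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1                                # equal is not "less than"
}
```

Usage mirrors the log's check: `ver_lt 1.15 2` succeeds, so the lcov in use predates 2.x and the older `--rc` option spelling is selected.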
00:28:27.221 00:28:27.221 ' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:27.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.221 --rc genhtml_branch_coverage=1 00:28:27.221 --rc genhtml_function_coverage=1 00:28:27.221 --rc genhtml_legend=1 00:28:27.221 --rc geninfo_all_blocks=1 00:28:27.221 --rc geninfo_unexecuted_blocks=1 00:28:27.221 00:28:27.221 ' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:27.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.221 --rc genhtml_branch_coverage=1 00:28:27.221 --rc genhtml_function_coverage=1 00:28:27.221 --rc genhtml_legend=1 00:28:27.221 --rc geninfo_all_blocks=1 00:28:27.221 --rc geninfo_unexecuted_blocks=1 00:28:27.221 00:28:27.221 ' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:27.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.221 --rc genhtml_branch_coverage=1 00:28:27.221 --rc genhtml_function_coverage=1 00:28:27.221 --rc genhtml_legend=1 00:28:27.221 --rc geninfo_all_blocks=1 00:28:27.221 --rc geninfo_unexecuted_blocks=1 00:28:27.221 00:28:27.221 ' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.221 05:50:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:27.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:27.221 05:50:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.793 
05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:33.793 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:33.793 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:33.793 Found net devices under 0000:86:00.0: cvl_0_0 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:33.793 Found net devices under 0000:86:00.1: cvl_0_1 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
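The discovery pass above resolves each matching PCI function (here `0000:86:00.0` and `.1`, an Intel E810 pair) to its kernel net device by globbing sysfs and stripping the directory prefix, which is how `cvl_0_0` and `cvl_0_1` are found. A sketch of just that step — the function name is mine, and it only returns something on a host where a net driver is bound to the given BDF:

```shell
# Sketch: list the net device(s) backing a PCI function the way the log's
# gather step does — glob /sys/bus/pci/devices/<bdf>/net/ and keep basenames.
pci_net_devs_for() {
    local pci=$1
    local -a devs=("/sys/bus/pci/devices/$pci/net/"*)
    [ -e "${devs[0]}" ] || return 1         # no net driver bound to this BDF
    printf '%s\n' "${devs[@]##*/}"          # path suffix = interface name
}
```

On the log's machine, `pci_net_devs_for 0000:86:00.0` would print `cvl_0_0`; for a BDF with no bound net driver the unmatched glob is caught by the `-e` test and the function fails instead of printing a literal glob pattern.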
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.793 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.794 05:50:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:28:33.794 00:28:33.794 --- 10.0.0.2 ping statistics --- 00:28:33.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.794 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:28:33.794 00:28:33.794 --- 10.0.0.1 ping statistics --- 00:28:33.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.794 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.794 05:50:20 
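The network plumbing the log just performed — create a namespace, move the target NIC into it, address both ends of the 10.0.0.0/24 link, open the NVMe/TCP port with a comment-tagged iptables rule, and ping in both directions to verify connectivity — condenses to the following root-only sketch. Interface names, addresses, and the `SPDK_NVMF` tag are the ones from the log; the function name and the bundling into one function are mine.

```shell
# Root-only sketch of the netns setup shown in the log. cvl_0_0 is the target
# NIC (moved into the namespace), cvl_0_1 stays in the initiator's root ns.
setup_target_ns() {
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target NIC into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port; the comment is what lets teardown
    # later strip exactly this rule with a grep over iptables-save output.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF'
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator
}
```

The two pings are the same sanity check the log records (0.377 ms and 0.125 ms round trips): both directions must work before the nvmf target is started inside the namespace with `ip netns exec`.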
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:33.794 ************************************ 00:28:33.794 START TEST nvmf_target_disconnect_tc1 00:28:33.794 ************************************ 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:33.794 [2024-11-27 05:50:20.979592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.794 [2024-11-27 05:50:20.979715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5dcac0 with 
addr=10.0.0.2, port=4420 00:28:33.794 [2024-11-27 05:50:20.979775] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:33.794 [2024-11-27 05:50:20.979801] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:33.794 [2024-11-27 05:50:20.979820] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:33.794 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:33.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:33.794 Initializing NVMe Controllers 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:33.794 00:28:33.794 real 0m0.117s 00:28:33.794 user 0m0.049s 00:28:33.794 sys 0m0.067s 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.794 05:50:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:33.794 ************************************ 00:28:33.794 END TEST nvmf_target_disconnect_tc1 00:28:33.794 ************************************ 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:33.794 05:50:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:33.794 ************************************ 00:28:33.794 START TEST nvmf_target_disconnect_tc2 00:28:33.794 ************************************ 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1922956 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1922956 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1922956 ']' 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.794 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:33.794 [2024-11-27 05:50:21.119864] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:28:33.794 [2024-11-27 05:50:21.119907] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.794 [2024-11-27 05:50:21.201875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.794 [2024-11-27 05:50:21.243324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.794 [2024-11-27 05:50:21.243363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.794 [2024-11-27 05:50:21.243371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.794 [2024-11-27 05:50:21.243377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.794 [2024-11-27 05:50:21.243382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:33.794 [2024-11-27 05:50:21.245070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:33.794 [2024-11-27 05:50:21.245177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:33.794 [2024-11-27 05:50:21.245283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:33.794 [2024-11-27 05:50:21.245284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:34.054 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.054 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:34.054 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:34.054 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.054 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.054 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.054 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:34.054 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.054 05:50:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.054 Malloc0 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.054 05:50:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.054 [2024-11-27 05:50:22.031167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.054 05:50:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.054 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.054 [2024-11-27 05:50:22.056140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.314 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.314 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:34.314 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.314 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:34.314 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.314 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1923204 00:28:34.314 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:34.314 05:50:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:36.231 05:50:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1922956 00:28:36.231 05:50:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Write completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read 
completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Write completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Write completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Write completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Write completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Write completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.231 starting I/O failed 00:28:36.231 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 [2024-11-27 05:50:24.084213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 
00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 
00:28:36.232 [2024-11-27 05:50:24.084413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 
starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 [2024-11-27 05:50:24.084602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, 
sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Read completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 Write completed with error (sct=0, sc=8) 00:28:36.232 starting I/O failed 00:28:36.232 [2024-11-27 05:50:24.084819] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:36.232 [2024-11-27 05:50:24.085082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.232 [2024-11-27 05:50:24.085106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.232 qpair failed and we were unable to recover it. 00:28:36.232 [2024-11-27 05:50:24.085269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.232 [2024-11-27 05:50:24.085280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.232 qpair failed and we were unable to recover it. 00:28:36.232 [2024-11-27 05:50:24.085450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.085462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.085662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.085678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.085853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.085864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 
00:28:36.233 [2024-11-27 05:50:24.086009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.086020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.086105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.086115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.086193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.086210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.086318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.086328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.086420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.086429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 
00:28:36.233 [2024-11-27 05:50:24.086688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.086700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.086791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.086801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.086950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.086961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.087046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.087055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.087126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.087135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 
00:28:36.233 [2024-11-27 05:50:24.087259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.087269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.087424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.087435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.087564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.087574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.087720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.087731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.087806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.087815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 
00:28:36.233 [2024-11-27 05:50:24.087951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.087961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.088060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.088070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.088157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.088166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.088257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.088267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 00:28:36.233 [2024-11-27 05:50:24.088459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.233 [2024-11-27 05:50:24.088469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.233 qpair failed and we were unable to recover it. 
00:28:36.233 [2024-11-27 05:50:24.088636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.088647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.088818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.088829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.088905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.088914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.088980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.088989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.089059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.089069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.089240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.089250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.089474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.089485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.089626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.089636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.089728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.089738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.089873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.089884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.090075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.090086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.090160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.090169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.090440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.090472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.090664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.090704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.233 qpair failed and we were unable to recover it.
00:28:36.233 [2024-11-27 05:50:24.090908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.233 [2024-11-27 05:50:24.090938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.091074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.091106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.091388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.091420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.091606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.091637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.091820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.091873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.092053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.092114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.092336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.092370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.092596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.092610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.092815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.092834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.093023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.093055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.093262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.093293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.093481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.093513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.093701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.093714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.093819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.093833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.093929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.093941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.094022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.094035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.094172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.094185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.094332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.094346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.094478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.094492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.094674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.094688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.094794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.094808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.094901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.094914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.094984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.094997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.095144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.095157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.095379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.095393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.095592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.095605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.095808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.095823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.095938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.095951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.096082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.096095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.096240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.096254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.096427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.096441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.096588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.096601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.096758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.096772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.096872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.096885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.096975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.096988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.097077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.097092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.234 [2024-11-27 05:50:24.097197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.234 [2024-11-27 05:50:24.097211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.234 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.097380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.097392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.097629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.097680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.097921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.097952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.098151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.098182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.098504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.098517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.098739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.098771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.098918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.098950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.099098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.099128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.099320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.099351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.099535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.099567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.099768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.099799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.099926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.099957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.100094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.100126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.100248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.100279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.100536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.100567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.100821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.100853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.101039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.101070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.101265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.101295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.101421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.101453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.101623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.101654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.101884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.101917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.102098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.102130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.102259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.102289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.102526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.102556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.102798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.102831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.103029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.103066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.103250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.103281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.103481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.103513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.103664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.103703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.103858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.103890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.104075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.104106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.104229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.104261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.104566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.104598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.104775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.104808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.104948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.104979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.105178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.105209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.105334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.105366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.235 [2024-11-27 05:50:24.105566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.235 [2024-11-27 05:50:24.105596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.235 qpair failed and we were unable to recover it.
00:28:36.236 [2024-11-27 05:50:24.105831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.236 [2024-11-27 05:50:24.105863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.236 qpair failed and we were unable to recover it.
00:28:36.236 [2024-11-27 05:50:24.105995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.236 [2024-11-27 05:50:24.106027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.236 qpair failed and we were unable to recover it.
00:28:36.236 [2024-11-27 05:50:24.106270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.236 [2024-11-27 05:50:24.106301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.236 qpair failed and we were unable to recover it.
00:28:36.236 [2024-11-27 05:50:24.106498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.236 [2024-11-27 05:50:24.106529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.236 qpair failed and we were unable to recover it.
00:28:36.236 [2024-11-27 05:50:24.106657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.236 [2024-11-27 05:50:24.106700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.236 qpair failed and we were unable to recover it.
00:28:36.236 [2024-11-27 05:50:24.106839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.236 [2024-11-27 05:50:24.106870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.236 qpair failed and we were unable to recover it.
00:28:36.236 [2024-11-27 05:50:24.107054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.107086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.107276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.107308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.107568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.107599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.107840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.107871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.108111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.108143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 
00:28:36.236 [2024-11-27 05:50:24.108289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.108321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.108560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.108592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.108860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.108894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.109046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.109079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.109281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.109313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 
00:28:36.236 [2024-11-27 05:50:24.109509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.109540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.109756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.109789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.109915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.109947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.110147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.110179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.110393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.110424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 
00:28:36.236 [2024-11-27 05:50:24.110605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.110636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.110889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.110921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.111210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.111242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.111430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.111461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.111644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.111682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 
00:28:36.236 [2024-11-27 05:50:24.111943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.111975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.112119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.112156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.112362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.112393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.112537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.112585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 00:28:36.236 [2024-11-27 05:50:24.112857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.112889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.236 qpair failed and we were unable to recover it. 
00:28:36.236 [2024-11-27 05:50:24.113087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.236 [2024-11-27 05:50:24.113118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.113382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.113413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.113652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.113690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.113897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.113928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.114102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.114134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 
00:28:36.237 [2024-11-27 05:50:24.114274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.114305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.114475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.114506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.114746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.114780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.114958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.114990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.115128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.115159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 
00:28:36.237 [2024-11-27 05:50:24.115338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.115370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.115634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.115664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.115823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.115854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.115974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.116006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.116149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.116180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 
00:28:36.237 [2024-11-27 05:50:24.116456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.116488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.116686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.116720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.116955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.116986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.117224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.117255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.117496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.117528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 
00:28:36.237 [2024-11-27 05:50:24.117736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.117769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.117920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.117951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.118146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.118176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.118387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.118419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.118675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.118708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 
00:28:36.237 [2024-11-27 05:50:24.118944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.118976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.119113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.119144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.119289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.119321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.119586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.119617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.119813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.119845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 
00:28:36.237 [2024-11-27 05:50:24.120062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.120094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.120422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.120453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.120727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.120760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.120899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.120930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.121059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.121090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 
00:28:36.237 [2024-11-27 05:50:24.121275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.121306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.121490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.121527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.121721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.237 [2024-11-27 05:50:24.121752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.237 qpair failed and we were unable to recover it. 00:28:36.237 [2024-11-27 05:50:24.121887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.121918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.122057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.122088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 
00:28:36.238 [2024-11-27 05:50:24.122278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.122309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.122596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.122627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.122837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.122869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.123065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.123097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.123308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.123339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 
00:28:36.238 [2024-11-27 05:50:24.123448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.123480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.123684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.123716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.123910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.123942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.124086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.124118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.124328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.124359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 
00:28:36.238 [2024-11-27 05:50:24.124639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.124699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.124842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.124873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.125112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.125144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.125360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.125392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.125620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.125651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 
00:28:36.238 [2024-11-27 05:50:24.125912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.125944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.126203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.126234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.126468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.126498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.126678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.126711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 00:28:36.238 [2024-11-27 05:50:24.126963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.126995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it. 
00:28:36.238 [2024-11-27 05:50:24.127110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.238 [2024-11-27 05:50:24.127141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.238 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock connection error for tqpair=0x7ff208000b90 at 10.0.0.2:4420, "qpair failed and we were unable to recover it") repeats with advancing timestamps from 2024-11-27 05:50:24.127319 through 05:50:24.153499, roughly 115 occurrences, log time 00:28:36.238-00:28:36.241 ...]
00:28:36.241 [2024-11-27 05:50:24.153685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.241 [2024-11-27 05:50:24.153717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.241 qpair failed and we were unable to recover it. 00:28:36.241 [2024-11-27 05:50:24.153858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.241 [2024-11-27 05:50:24.153889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.241 qpair failed and we were unable to recover it. 00:28:36.241 [2024-11-27 05:50:24.154063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.241 [2024-11-27 05:50:24.154094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.241 qpair failed and we were unable to recover it. 00:28:36.241 [2024-11-27 05:50:24.154270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.241 [2024-11-27 05:50:24.154301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.241 qpair failed and we were unable to recover it. 00:28:36.241 [2024-11-27 05:50:24.154523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.241 [2024-11-27 05:50:24.154554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.241 qpair failed and we were unable to recover it. 
00:28:36.241 [2024-11-27 05:50:24.154771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.241 [2024-11-27 05:50:24.154804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.241 qpair failed and we were unable to recover it. 00:28:36.241 [2024-11-27 05:50:24.155070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.241 [2024-11-27 05:50:24.155102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.241 qpair failed and we were unable to recover it. 00:28:36.241 [2024-11-27 05:50:24.155302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.241 [2024-11-27 05:50:24.155333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.241 qpair failed and we were unable to recover it. 00:28:36.241 [2024-11-27 05:50:24.155590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.241 [2024-11-27 05:50:24.155622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.241 qpair failed and we were unable to recover it. 00:28:36.241 [2024-11-27 05:50:24.155845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.241 [2024-11-27 05:50:24.155877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.241 qpair failed and we were unable to recover it. 
00:28:36.241 [2024-11-27 05:50:24.156128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.156160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.156363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.156393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.156587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.156618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.156893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.156925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.157214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.157245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 
00:28:36.242 [2024-11-27 05:50:24.157458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.157489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.157733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.157765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.158057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.158088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.158278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.158309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.158485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.158516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 
00:28:36.242 [2024-11-27 05:50:24.158701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.158733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.158937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.158974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.159235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.159266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.159460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.159491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.159684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.159717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 
00:28:36.242 [2024-11-27 05:50:24.159915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.159947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.160084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.160115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.160237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.160269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.160539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.160571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.160690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.160723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 
00:28:36.242 [2024-11-27 05:50:24.160909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.160941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.161137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.161168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.161413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.161444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.161717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.161751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.161941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.161973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 
00:28:36.242 [2024-11-27 05:50:24.162153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.162185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.162473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.162504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.162812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.162862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.163009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.163041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.163334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.163366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 
00:28:36.242 [2024-11-27 05:50:24.163579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.163611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.163751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.163783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.163993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.164025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.164178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.164208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 00:28:36.242 [2024-11-27 05:50:24.164425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.242 [2024-11-27 05:50:24.164457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.242 qpair failed and we were unable to recover it. 
00:28:36.243 [2024-11-27 05:50:24.164583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.164614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.164877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.164909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.165128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.165159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.165494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.165526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.165681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.165716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 
00:28:36.243 [2024-11-27 05:50:24.165913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.165944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.166086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.166118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.166333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.166364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.166554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.166585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.166825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.166859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 
00:28:36.243 [2024-11-27 05:50:24.167056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.167088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.167322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.167353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.167545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.167577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.167814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.167846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.168018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.168050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 
00:28:36.243 [2024-11-27 05:50:24.168263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.168294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.168488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.168525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.168752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.168784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.168910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.168942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.169196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.169228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 
00:28:36.243 [2024-11-27 05:50:24.169502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.169533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.169723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.169756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.169888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.169920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.170114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.170145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.170373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.170405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 
00:28:36.243 [2024-11-27 05:50:24.170580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.170611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.170840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.170872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.171000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.171032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.171234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.171264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.171461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.171492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 
00:28:36.243 [2024-11-27 05:50:24.171706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.171739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.171928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.171960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.172106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.172137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.172276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.172307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 00:28:36.243 [2024-11-27 05:50:24.172515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.243 [2024-11-27 05:50:24.172547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.243 qpair failed and we were unable to recover it. 
00:28:36.244 [... trimmed: the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously for tqpair=0x7ff208000b90 (addr=10.0.0.2, port=4420) through 2024-11-27 05:50:24.199897 ...]
00:28:36.247 [2024-11-27 05:50:24.200073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.200105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.200302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.200333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.200602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.200634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.200837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.200870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.201067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.201098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 
00:28:36.247 [2024-11-27 05:50:24.201235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.201267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.201472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.201504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.201771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.201804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.202007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.202039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.202261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.202292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 
00:28:36.247 [2024-11-27 05:50:24.202429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.202460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.202735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.202767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.202919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.202951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.203159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.203191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.203400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.203431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 
00:28:36.247 [2024-11-27 05:50:24.203689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.203722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.203867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.203899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.204079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.204110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.204337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.204370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.204622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.204652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 
00:28:36.247 [2024-11-27 05:50:24.204875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.204907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.205110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.205141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.205466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.205498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.205703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.205736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.205870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.205903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 
00:28:36.247 [2024-11-27 05:50:24.206198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.206231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.206529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.206560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.206772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.206806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.207002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.207034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.207308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.207339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 
00:28:36.247 [2024-11-27 05:50:24.207598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.207630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.207789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.207828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.207977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.208008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.208147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.208178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.208312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.208344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 
00:28:36.247 [2024-11-27 05:50:24.208526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.247 [2024-11-27 05:50:24.208557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.247 qpair failed and we were unable to recover it. 00:28:36.247 [2024-11-27 05:50:24.208821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.208855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.208985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.209017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.209301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.209332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.209603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.209635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 
00:28:36.248 [2024-11-27 05:50:24.209870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.209903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.210085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.210116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.210296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.210328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.210527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.210558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.210699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.210731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 
00:28:36.248 [2024-11-27 05:50:24.210951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.210984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.211291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.211324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.211523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.211555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.211826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.211858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.212004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.212035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 
00:28:36.248 [2024-11-27 05:50:24.212341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.212371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.212548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.212580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.212821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.212854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.213050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.213081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.213291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.213323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 
00:28:36.248 [2024-11-27 05:50:24.213456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.213488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.213750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.213782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.213964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.213995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.214237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.214268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.214474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.214506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 
00:28:36.248 [2024-11-27 05:50:24.214647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.214701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.214955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.214986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.215193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.215225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.215428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.215459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.215737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.215769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 
00:28:36.248 [2024-11-27 05:50:24.216022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.216054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.216254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.216286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.216442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.216473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.216684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.216717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.216904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.216936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 
00:28:36.248 [2024-11-27 05:50:24.217208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.217240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.217431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.217468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.217728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.217762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.248 [2024-11-27 05:50:24.217973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.248 [2024-11-27 05:50:24.218004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.248 qpair failed and we were unable to recover it. 00:28:36.249 [2024-11-27 05:50:24.218208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.249 [2024-11-27 05:50:24.218241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.249 qpair failed and we were unable to recover it. 
00:28:36.249 [2024-11-27 05:50:24.218442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.249 [2024-11-27 05:50:24.218473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.249 qpair failed and we were unable to recover it.
00:28:36.249 [... last 3 messages repeated for each reconnect attempt, tqpair=0x7ff208000b90, addr=10.0.0.2, port=4420, through 2024-11-27 05:50:24.248314 ...]
00:28:36.547 [2024-11-27 05:50:24.248314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.547 [2024-11-27 05:50:24.248347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.547 qpair failed and we were unable to recover it.
00:28:36.547 [2024-11-27 05:50:24.248631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.248663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.248950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.248982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.249311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.249344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.249547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.249578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.249822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.249856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 
00:28:36.547 [2024-11-27 05:50:24.250065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.250098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.250343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.250377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.250573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.250604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.250791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.250825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.251075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.251107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 
00:28:36.547 [2024-11-27 05:50:24.251291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.251323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.251578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.251609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.251822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.251855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.252001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.252032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.252214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.252247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 
00:28:36.547 [2024-11-27 05:50:24.252458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.252491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.252774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.252808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.252974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.253006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.253313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.253346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.253587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.253619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 
00:28:36.547 [2024-11-27 05:50:24.253879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.253915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.254054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.254086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.254242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.254274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.254531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.254563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.254782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.254818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 
00:28:36.547 [2024-11-27 05:50:24.255028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.255060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.255221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.255253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.255476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.255509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.255820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.547 [2024-11-27 05:50:24.255859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.547 qpair failed and we were unable to recover it. 00:28:36.547 [2024-11-27 05:50:24.256066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.256101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 
00:28:36.548 [2024-11-27 05:50:24.256292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.256323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.256530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.256566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.256714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.256748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.256963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.256996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.257300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.257332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 
00:28:36.548 [2024-11-27 05:50:24.257472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.257503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.257787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.257822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.257982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.258014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.258207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.258239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.258454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.258486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 
00:28:36.548 [2024-11-27 05:50:24.258742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.258776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.258973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.259006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.259161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.259193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.259342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.259374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.259637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.259679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 
00:28:36.548 [2024-11-27 05:50:24.259871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.259903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.260155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.260187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.260401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.260432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.260713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.260747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.260966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.260999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 
00:28:36.548 [2024-11-27 05:50:24.261204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.261236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.261422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.261454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.261712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.261746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.261882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.261914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.262099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.262130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 
00:28:36.548 [2024-11-27 05:50:24.262471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.262504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.262721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.262756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.262958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.262990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.263122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.263154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.263452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.263485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 
00:28:36.548 [2024-11-27 05:50:24.263793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.263827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.264093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.264125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.264328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.264360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.264571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.264604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.264758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.264791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 
00:28:36.548 [2024-11-27 05:50:24.264997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.265030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.265173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.548 [2024-11-27 05:50:24.265205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.548 qpair failed and we were unable to recover it. 00:28:36.548 [2024-11-27 05:50:24.265504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.265536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 00:28:36.549 [2024-11-27 05:50:24.265781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.265822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 00:28:36.549 [2024-11-27 05:50:24.266145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.266178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 
00:28:36.549 [2024-11-27 05:50:24.266452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.266485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 00:28:36.549 [2024-11-27 05:50:24.266626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.266658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 00:28:36.549 [2024-11-27 05:50:24.266852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.266884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 00:28:36.549 [2024-11-27 05:50:24.267069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.267101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 00:28:36.549 [2024-11-27 05:50:24.267289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.267320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 
00:28:36.549 [2024-11-27 05:50:24.267522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.267554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 00:28:36.549 [2024-11-27 05:50:24.267827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.267861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 00:28:36.549 [2024-11-27 05:50:24.268071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.268103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 00:28:36.549 [2024-11-27 05:50:24.268303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.268335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 00:28:36.549 [2024-11-27 05:50:24.268516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.549 [2024-11-27 05:50:24.268548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.549 qpair failed and we were unable to recover it. 
00:28:36.552 [2024-11-27 05:50:24.299367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.299399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.299590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.299622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.299928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.299962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.300112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.300143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.300282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.300315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 
00:28:36.552 [2024-11-27 05:50:24.300544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.300581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.300766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.300798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.300956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.300987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.301318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.301351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.301548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.301580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 
00:28:36.552 [2024-11-27 05:50:24.301791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.301825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.301987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.302020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.302152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.302185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.302420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.302453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.302589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.302621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 
00:28:36.552 [2024-11-27 05:50:24.302948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.302982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.303144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.303177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.303469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.303502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.303740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.303774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.303972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.304005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 
00:28:36.552 [2024-11-27 05:50:24.304218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.304249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.304384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.304416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.552 [2024-11-27 05:50:24.304704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.552 [2024-11-27 05:50:24.304739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.552 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.304897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.304929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.305135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.305168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 
00:28:36.553 [2024-11-27 05:50:24.305379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.305412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.305537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.305569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.305824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.305857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.306055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.306087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.306228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.306260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 
00:28:36.553 [2024-11-27 05:50:24.306391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.306423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.306736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.306771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.307029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.307061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.307440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.307472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.307614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.307647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 
00:28:36.553 [2024-11-27 05:50:24.307916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.307950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.308110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.308143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.308369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.308401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.308597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.308635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.308802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.308835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 
00:28:36.553 [2024-11-27 05:50:24.309056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.309088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.309271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.309304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.309527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.309560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.309773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.309807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.309949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.309981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 
00:28:36.553 [2024-11-27 05:50:24.310137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.310169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.310308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.310340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.310539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.310571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.310717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.310751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.310951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.310983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 
00:28:36.553 [2024-11-27 05:50:24.311187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.311219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.311522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.311555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.311771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.311804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.311937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.311970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 00:28:36.553 [2024-11-27 05:50:24.312179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.312211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.553 qpair failed and we were unable to recover it. 
00:28:36.553 [2024-11-27 05:50:24.312378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.553 [2024-11-27 05:50:24.312410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.312629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.312661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.312889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.312921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.313052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.313084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.313311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.313343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 
00:28:36.554 [2024-11-27 05:50:24.313542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.313574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.313715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.313749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.313954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.313986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.314146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.314178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.314415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.314447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 
00:28:36.554 [2024-11-27 05:50:24.314761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.314796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.314962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.314995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.315199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.315231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.315452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.315485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.315812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.315846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 
00:28:36.554 [2024-11-27 05:50:24.316001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.316033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.316237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.316269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.316466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.316498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.316692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.316726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.316881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.316912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 
00:28:36.554 [2024-11-27 05:50:24.317051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.317083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.317229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.317261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.317526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.317558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.317813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.317853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.318131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.318163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 
00:28:36.554 [2024-11-27 05:50:24.318462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.318494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.318720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.318755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.318965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.318997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.319205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.319238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 00:28:36.554 [2024-11-27 05:50:24.319494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.554 [2024-11-27 05:50:24.319525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.554 qpair failed and we were unable to recover it. 
00:28:36.554 [2024-11-27 05:50:24.319721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.554 [2024-11-27 05:50:24.319754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.554 qpair failed and we were unable to recover it.
00:28:36.554 [2024-11-27 05:50:24.319981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.554 [2024-11-27 05:50:24.320013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.554 qpair failed and we were unable to recover it.
00:28:36.554 [2024-11-27 05:50:24.320164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.554 [2024-11-27 05:50:24.320196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.554 qpair failed and we were unable to recover it.
00:28:36.554 [2024-11-27 05:50:24.320306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.554 [2024-11-27 05:50:24.320338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.554 qpair failed and we were unable to recover it.
00:28:36.554 [2024-11-27 05:50:24.320614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.554 [2024-11-27 05:50:24.320646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.554 qpair failed and we were unable to recover it.
00:28:36.554 [2024-11-27 05:50:24.320854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.554 [2024-11-27 05:50:24.320887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.554 qpair failed and we were unable to recover it.
00:28:36.554 [2024-11-27 05:50:24.321143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.554 [2024-11-27 05:50:24.321175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.554 qpair failed and we were unable to recover it.
00:28:36.554 [2024-11-27 05:50:24.321434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.554 [2024-11-27 05:50:24.321467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.554 qpair failed and we were unable to recover it.
00:28:36.554 [2024-11-27 05:50:24.321692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.554 [2024-11-27 05:50:24.321726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.321861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.321893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.322090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.322122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.322403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.322435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.322663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.322710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.322920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.322952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.323088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.323120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.323255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.323286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.323446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.323479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.323712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.323746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.323944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.323976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.324112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.324143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.324410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.324444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.324589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.324621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.324811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.324846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.325049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.325081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.325209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.325241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.325540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.325573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.325784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.325818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.326024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.326057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.326267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.326300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.326500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.326532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.326737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.326772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.326978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.327009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.327152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.327185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.327331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.327369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.327645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.327691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.327894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.327925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.328180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.328211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.328445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.328476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.328734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.328768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.328977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.329009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.329312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.329343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.329538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.329569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.329795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.329829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.329969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.330000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.330203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.330234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.330449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.330482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.555 qpair failed and we were unable to recover it.
00:28:36.555 [2024-11-27 05:50:24.330691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.555 [2024-11-27 05:50:24.330744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.330895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.330927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.331079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.331112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.331413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.331444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.331739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.331772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.331973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.332004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.332211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.332243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.332443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.332474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.332776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.332810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.333051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.333083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.333316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.333348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.333514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.333546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.333763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.333797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.333961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.333993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.334281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.334359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.334665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.334715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.334942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.334976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.335134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.335166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.335431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.335464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.335668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.335714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.335925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.335958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.336108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.336139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.336431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.336463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.336660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.336703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.336910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.336944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.337196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.337229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.337428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.337461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.337691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.337736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.337893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.337927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.338116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.338147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.338438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.338473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.338681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.338715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.338874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.338907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.339170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.339203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.339418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.339451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.339734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.339769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.339919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.339953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.340207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.340239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.556 qpair failed and we were unable to recover it.
00:28:36.556 [2024-11-27 05:50:24.340443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.556 [2024-11-27 05:50:24.340475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.340660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.557 [2024-11-27 05:50:24.340702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.340838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.557 [2024-11-27 05:50:24.340870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.341013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.557 [2024-11-27 05:50:24.341047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.341316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.557 [2024-11-27 05:50:24.341350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.341642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.557 [2024-11-27 05:50:24.341683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.341843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.557 [2024-11-27 05:50:24.341876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.342072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.557 [2024-11-27 05:50:24.342105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.342302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.557 [2024-11-27 05:50:24.342334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.342632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.557 [2024-11-27 05:50:24.342663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.342871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.557 [2024-11-27 05:50:24.342903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.557 qpair failed and we were unable to recover it.
00:28:36.557 [2024-11-27 05:50:24.343034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.343067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.343368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.343401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.343582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.343614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.343774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.343807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.344003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.344035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 
00:28:36.557 [2024-11-27 05:50:24.344260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.344301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.344512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.344546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.344760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.344794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.345003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.345037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.345241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.345273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 
00:28:36.557 [2024-11-27 05:50:24.345410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.345442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.345656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.345701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.345916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.345948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.346144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.346177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.346341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.346373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 
00:28:36.557 [2024-11-27 05:50:24.346570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.346603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.346760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.346794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.347077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.347111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.347387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.347428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.347579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.347611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 
00:28:36.557 [2024-11-27 05:50:24.347765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.347799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.348005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.348038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.348250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.348282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.348475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.348507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.348744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.348780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 
00:28:36.557 [2024-11-27 05:50:24.348916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.348948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.349111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.349144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.557 [2024-11-27 05:50:24.349348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.557 [2024-11-27 05:50:24.349381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.557 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.349575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.349608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.349749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.349782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 
00:28:36.558 [2024-11-27 05:50:24.350041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.350074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.350397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.350430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.350649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.350712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.350872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.350904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.351179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.351212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 
00:28:36.558 [2024-11-27 05:50:24.351344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.351377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.351604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.351636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.351839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.351872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.352066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.352098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.352423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.352455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 
00:28:36.558 [2024-11-27 05:50:24.352697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.352730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.352926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.352959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.353212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.353246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.353585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.353617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.353857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.353891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 
00:28:36.558 [2024-11-27 05:50:24.354059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.354098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.354250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.354283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.354508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.354540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.354666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.354708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.354922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.354954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 
00:28:36.558 [2024-11-27 05:50:24.355110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.355143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.355363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.355397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.355597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.355628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.355825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.355859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.356004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.356035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 
00:28:36.558 [2024-11-27 05:50:24.356314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.356347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.356480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.356511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.356717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.356749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.356945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.356979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.558 [2024-11-27 05:50:24.357183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.357217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 
00:28:36.558 [2024-11-27 05:50:24.357366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.558 [2024-11-27 05:50:24.357398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.558 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.357586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.357618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.357778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.357810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.357948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.357981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.358108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.358139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 
00:28:36.559 [2024-11-27 05:50:24.358262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.358294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.358429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.358460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.358617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.358648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.358811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.358842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.359058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.359091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 
00:28:36.559 [2024-11-27 05:50:24.359219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.359252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.359480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.359513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.359717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.359751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.359897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.359928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.360055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.360086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 
00:28:36.559 [2024-11-27 05:50:24.360293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.360327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.360446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.360478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.360611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.360644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.360931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.360964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.361107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.361139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 
00:28:36.559 [2024-11-27 05:50:24.361321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.361353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.361484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.361514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.361641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.361682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.361798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.361831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 00:28:36.559 [2024-11-27 05:50:24.362053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.362085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it. 
00:28:36.559 [2024-11-27 05:50:24.362209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.559 [2024-11-27 05:50:24.362247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.559 qpair failed and we were unable to recover it.
[... the same error triplet — posix.c:1054:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats approximately 114 more times between 05:50:24.362433 and 05:50:24.385428; repeats elided ...]
00:28:36.562 [2024-11-27 05:50:24.385620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.385651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.385793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.385824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.385964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.385995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.386115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.386147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.386274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.386305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 
00:28:36.562 [2024-11-27 05:50:24.386419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.386457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.386592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.386622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.386748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.386782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.386919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.386952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.387159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.387191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 
00:28:36.562 [2024-11-27 05:50:24.387316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.387349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.387492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.387523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.387630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.387662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.387807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.387838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.387985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.388019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 
00:28:36.562 [2024-11-27 05:50:24.388160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.388190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.388313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.388344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.388454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.388485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.388603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.388634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.388905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.388938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 
00:28:36.562 [2024-11-27 05:50:24.389234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.389267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.389538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.389570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.389724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.389759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.389883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.389917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.390049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.390080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 
00:28:36.562 [2024-11-27 05:50:24.390229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.390262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.390480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.390511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.390632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.562 [2024-11-27 05:50:24.390663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.562 qpair failed and we were unable to recover it. 00:28:36.562 [2024-11-27 05:50:24.390851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.390882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.391077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.391108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 
00:28:36.563 [2024-11-27 05:50:24.391360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.391394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.391574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.391606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.391770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.391812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.391939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.391971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.392169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.392201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 
00:28:36.563 [2024-11-27 05:50:24.392320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.392350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.392543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.392576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.392701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.392733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.392919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.392949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.393133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.393164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 
00:28:36.563 [2024-11-27 05:50:24.393439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.393471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.393608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.393640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.393758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.393790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.393993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.394022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.394225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.394257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 
00:28:36.563 [2024-11-27 05:50:24.394402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.394438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.394633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.394664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.394866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.394898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.395043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.395074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.395275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.395312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 
00:28:36.563 [2024-11-27 05:50:24.395587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.395621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.395826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.395858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.396043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.396073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.396190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.396221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.396354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.396384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 
00:28:36.563 [2024-11-27 05:50:24.396507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.396538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.396653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.396694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.396823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.396854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.397103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.397136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.397306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.397338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 
00:28:36.563 [2024-11-27 05:50:24.397459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.397491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.397694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.397728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.397922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.397956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.398069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.398100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.398274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.398306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 
00:28:36.563 [2024-11-27 05:50:24.398461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.398493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.398605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.398635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.398794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.398827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.399023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.399054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.399282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.399314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 
00:28:36.563 [2024-11-27 05:50:24.399536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.399568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.399703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.399736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.399876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.399907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.400092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.400125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 00:28:36.563 [2024-11-27 05:50:24.400245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.563 [2024-11-27 05:50:24.400278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.563 qpair failed and we were unable to recover it. 
00:28:36.563 [2024-11-27 05:50:24.400398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.563 [2024-11-27 05:50:24.400429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.563 qpair failed and we were unable to recover it.
[The identical error triple above repeats 114 more times between 05:50:24.400 and 05:50:24.421: same tqpair (0x7ff204000b90), same target (10.0.0.2:4420), same errno = 111 (ECONNREFUSED); only the microsecond timestamps differ.]
00:28:36.565 [2024-11-27 05:50:24.421403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.421432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.421547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.421574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.421749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.421780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.421903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.421931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.422032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.422062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 
00:28:36.565 [2024-11-27 05:50:24.422326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.422356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.422538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.422567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.422736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.422765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.422880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.422908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.423092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.423121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 
00:28:36.565 [2024-11-27 05:50:24.423251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.423279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.423393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.423420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.423518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.423545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.423661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.423699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.423878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.423907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 
00:28:36.565 [2024-11-27 05:50:24.424017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.424051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.424166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.424194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.424309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.424338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.424511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.424540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.424651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.424688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 
00:28:36.565 [2024-11-27 05:50:24.424823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.424852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.425023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.425051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.425238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.425266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.425474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.425503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.425627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.425657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 
00:28:36.565 [2024-11-27 05:50:24.425915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.425945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.426062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.426091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.426258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.426286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.426420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.426448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.426567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.426596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 
00:28:36.565 [2024-11-27 05:50:24.426764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.426793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.426921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.426950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.427074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.427103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.427210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.427238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.427346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.427375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 
00:28:36.565 [2024-11-27 05:50:24.427479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.427507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.427613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.427643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.427901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.427973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.428158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.428195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.428349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.428381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 
00:28:36.565 [2024-11-27 05:50:24.428576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.428608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.428815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.428849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.428978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.429009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.565 qpair failed and we were unable to recover it. 00:28:36.565 [2024-11-27 05:50:24.429204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.565 [2024-11-27 05:50:24.429236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.429355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.429386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 
00:28:36.566 [2024-11-27 05:50:24.429662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.429704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.429891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.429923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.430048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.430079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.430267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.430297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.430422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.430453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 
00:28:36.566 [2024-11-27 05:50:24.430576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.430608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.430722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.430754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.430948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.430979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.431110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.431141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.431314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.431345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 
00:28:36.566 [2024-11-27 05:50:24.431524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.431561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.431740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.431775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.431985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.432016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.432195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.432227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.432344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.432376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 
00:28:36.566 [2024-11-27 05:50:24.432481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.432512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.432624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.432655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.432846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.432878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.432997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.433029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.433219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.433249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 
00:28:36.566 [2024-11-27 05:50:24.433440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.433471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.433692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.433725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.433919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.433950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.434067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.434098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.434289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.434321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 
00:28:36.566 [2024-11-27 05:50:24.434430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.434462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.434636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.434667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.434818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.434852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.434964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.434996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 00:28:36.566 [2024-11-27 05:50:24.435113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.566 [2024-11-27 05:50:24.435143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.566 qpair failed and we were unable to recover it. 
00:28:36.566 [2024-11-27 05:50:24.435255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.566 [2024-11-27 05:50:24.435287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.566 qpair failed and we were unable to recover it.
00:28:36.566 [... the same connect()-failed / sock-connection-error / qpair-failed triplet repeats for tqpair=0x7ff208000b90 through 2024-11-27 05:50:24.443244, then for tqpair=0x1c26be0 through 2024-11-27 05:50:24.456334, always errno = 111, addr=10.0.0.2, port=4420 ...]
00:28:36.568 [2024-11-27 05:50:24.456476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.456507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.456697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.456730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.456851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.456882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.457074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.457105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.457314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.457346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.457593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.457624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.457736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.457768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.457944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.457976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.458129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.458160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.458360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.458391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.458576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.458607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.458786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.458819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.458934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.458965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.459205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.459236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.459444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.459476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.459679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.459712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.459847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.459878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.460065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.460097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.460271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.460301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.460420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.460451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.460572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.460606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.460737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.460769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.460872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.460904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.461036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.461068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.461185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.461217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.461320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.461352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.461592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.461623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.461764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.461796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.461969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.462000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.462104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.462135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.462317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.462348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.462585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.462615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.462790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.462822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.462956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.462986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.463108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.463139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.463240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.463272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.463453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.463484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.463601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.463631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.463762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.463795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.463912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.463943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.464068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.464100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.464290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.464320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.464427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.464458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.464630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.464661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.464794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.464825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.465008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.465039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.465306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.465336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.465456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.465486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.465611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.465642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.465823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.465854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.466043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.466074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.466268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.466299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.466532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.466563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.466684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.466716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.466916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.466953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 
00:28:36.568 [2024-11-27 05:50:24.467056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.467086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.467267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.467298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.467485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.568 [2024-11-27 05:50:24.467516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.568 qpair failed and we were unable to recover it. 00:28:36.568 [2024-11-27 05:50:24.467650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.467690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.467824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.467854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 
00:28:36.569 [2024-11-27 05:50:24.468098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.468130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.468252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.468282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.468457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.468488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.468654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.468721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.468848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.468880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 
00:28:36.569 [2024-11-27 05:50:24.469048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.469080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.469271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.469302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.469508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.469539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.469742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.469774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.469951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.469982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 
00:28:36.569 [2024-11-27 05:50:24.470190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.470221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.470396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.470426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.470554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.470585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.470773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.470805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.470999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.471029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 
00:28:36.569 [2024-11-27 05:50:24.471219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.471250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.471424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.471455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.471637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.471668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.471796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.471826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 00:28:36.569 [2024-11-27 05:50:24.472099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.569 [2024-11-27 05:50:24.472130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.569 qpair failed and we were unable to recover it. 
00:28:36.571 [2024-11-27 05:50:24.494481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.571 [2024-11-27 05:50:24.494512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.571 qpair failed and we were unable to recover it. 00:28:36.571 [2024-11-27 05:50:24.494630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.571 [2024-11-27 05:50:24.494661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.571 qpair failed and we were unable to recover it. 00:28:36.571 [2024-11-27 05:50:24.494911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.571 [2024-11-27 05:50:24.494943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.571 qpair failed and we were unable to recover it. 00:28:36.571 [2024-11-27 05:50:24.495184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.571 [2024-11-27 05:50:24.495214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.571 qpair failed and we were unable to recover it. 00:28:36.571 [2024-11-27 05:50:24.495453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.571 [2024-11-27 05:50:24.495484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.571 qpair failed and we were unable to recover it. 
00:28:36.571 [2024-11-27 05:50:24.495596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.571 [2024-11-27 05:50:24.495627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.571 qpair failed and we were unable to recover it. 00:28:36.571 [2024-11-27 05:50:24.495900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.571 [2024-11-27 05:50:24.495935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.571 qpair failed and we were unable to recover it. 00:28:36.571 [2024-11-27 05:50:24.496129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.571 [2024-11-27 05:50:24.496159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.571 qpair failed and we were unable to recover it. 00:28:36.571 [2024-11-27 05:50:24.496347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.571 [2024-11-27 05:50:24.496378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.571 qpair failed and we were unable to recover it. 00:28:36.571 [2024-11-27 05:50:24.496566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.496596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 
00:28:36.572 [2024-11-27 05:50:24.496788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.496821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.497112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.497142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.497316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.497347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.497571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.497608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.497742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.497775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 
00:28:36.572 [2024-11-27 05:50:24.498023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.498055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.498310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.498341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.498523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.498554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.498753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.498784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.499003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.499034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 
00:28:36.572 [2024-11-27 05:50:24.499213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.499244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.499361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.499391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.499514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.499544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.499811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.499844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.499965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.499996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 
00:28:36.572 [2024-11-27 05:50:24.500236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.500267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.500446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.500477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.500601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.500632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.500780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.500812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.500919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.500950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 
00:28:36.572 [2024-11-27 05:50:24.501124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.501155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.501289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.501320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.501443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.501474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.501716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.501749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.501984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.502015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 
00:28:36.572 [2024-11-27 05:50:24.502181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.502212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.502398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.502428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.502530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.502560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.502734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.502766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.502954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.502986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 
00:28:36.572 [2024-11-27 05:50:24.503250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.503282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.503526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.503557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.503752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.503783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.503907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.503938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.504128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.504159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 
00:28:36.572 [2024-11-27 05:50:24.504350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.504381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.504550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.504581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.504708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.504740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.504921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.504952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.505070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.505101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 
00:28:36.572 [2024-11-27 05:50:24.505367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.505398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.505690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.505723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.505920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.505951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.506087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.506118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.506260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.506291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 
00:28:36.572 [2024-11-27 05:50:24.506394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.506425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.506628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.506659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.506790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.572 [2024-11-27 05:50:24.506822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.572 qpair failed and we were unable to recover it. 00:28:36.572 [2024-11-27 05:50:24.507007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.507038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.507178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.507209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 
00:28:36.573 [2024-11-27 05:50:24.507392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.507423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.507543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.507574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.507835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.507868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.508053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.508084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.508293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.508324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 
00:28:36.573 [2024-11-27 05:50:24.508429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.508460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.508650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.508689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.508897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.508927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.509139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.509170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.509360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.509390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 
00:28:36.573 [2024-11-27 05:50:24.509558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.509589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.509693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.509725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.509914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.509944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.510151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.510182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.510384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.510416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 
00:28:36.573 [2024-11-27 05:50:24.510603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.510633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.510771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.510802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.573 [2024-11-27 05:50:24.511009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.573 [2024-11-27 05:50:24.511040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.573 qpair failed and we were unable to recover it. 00:28:36.860 [2024-11-27 05:50:24.511229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.860 [2024-11-27 05:50:24.511261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.860 qpair failed and we were unable to recover it. 00:28:36.860 [2024-11-27 05:50:24.511455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.860 [2024-11-27 05:50:24.511486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.860 qpair failed and we were unable to recover it. 
00:28:36.861 [2024-11-27 05:50:24.511682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.511716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.511907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.511944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.512117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.512147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.512339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.512371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.512542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.512572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 
00:28:36.861 [2024-11-27 05:50:24.512777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.512809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.512980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.513012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.513182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.513213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.513348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.513379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.513568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.513598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 
00:28:36.861 [2024-11-27 05:50:24.513769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.513802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.513932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.513962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.514093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.514124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.514240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.514271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.514455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.514486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 
00:28:36.861 [2024-11-27 05:50:24.514603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.514634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.514861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.514894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.515078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.515110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.515219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.515250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.515367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.515398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 
00:28:36.861 [2024-11-27 05:50:24.515530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.515561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.515825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.515856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.516041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.516072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.516249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.516279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.516401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.516432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 
00:28:36.861 [2024-11-27 05:50:24.516697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.516730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.516916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.516947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.517131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.517162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.517273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.517310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.517488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.517520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 
00:28:36.861 [2024-11-27 05:50:24.517649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.517690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.517933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.517964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.518084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.518115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.518320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.518350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 00:28:36.861 [2024-11-27 05:50:24.518480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.518510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.861 qpair failed and we were unable to recover it. 
00:28:36.861 [2024-11-27 05:50:24.518715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.861 [2024-11-27 05:50:24.518747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.518861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.518892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.519071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.519101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.519284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.519314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.519556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.519588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 
00:28:36.862 [2024-11-27 05:50:24.519761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.519793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.519984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.520015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.520228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.520258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.520446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.520477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.520608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.520639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 
00:28:36.862 [2024-11-27 05:50:24.520861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.520932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.521253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.521322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.521590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.521624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.521776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.521812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.521984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.522015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 
00:28:36.862 [2024-11-27 05:50:24.522161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.522193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.522405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.522438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.522608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.522640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.522779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.522814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.523069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.523101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 
00:28:36.862 [2024-11-27 05:50:24.523278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.523314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.523556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.523587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.523709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.523741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.523931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.523963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.524223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.524254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 
00:28:36.862 [2024-11-27 05:50:24.524386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.524417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.524527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.524557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.524686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.524719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.524902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.524933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.525050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.525080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 
00:28:36.862 [2024-11-27 05:50:24.525206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.525237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.525409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.525440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.525555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.525586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.525694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.862 [2024-11-27 05:50:24.525726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.862 qpair failed and we were unable to recover it. 00:28:36.862 [2024-11-27 05:50:24.525914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.525946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 
00:28:36.863 [2024-11-27 05:50:24.526187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.526218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.526435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.526467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.526591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.526622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.526822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.526854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.526986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.527017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 
00:28:36.863 [2024-11-27 05:50:24.527149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.527179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.527298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.527329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.527502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.527533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.527706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.527738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.528021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.528053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 
00:28:36.863 [2024-11-27 05:50:24.528176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.528207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.528312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.528343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.528603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.528640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.528838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c34b20 is same with the state(6) to be set 00:28:36.863 [2024-11-27 05:50:24.529132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.529213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.529430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.529467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 
00:28:36.863 [2024-11-27 05:50:24.529656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.529703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.529888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.529920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.530089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.530119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.530303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.530334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 00:28:36.863 [2024-11-27 05:50:24.530510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.530542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 
00:28:36.863 [2024-11-27 05:50:24.530722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.863 [2024-11-27 05:50:24.530755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.863 qpair failed and we were unable to recover it. 
[... identical message pair repeated continuously from 05:50:24.530860 through 05:50:24.554447 (errno = 111, addr=10.0.0.2, port=4420 unchanged throughout); tqpair=0x7ff208000b90 through 05:50:24.543640, then tqpair=0x7ff210000b90 from 05:50:24.543814 onward; every attempt ended with "qpair failed and we were unable to recover it." ...]
00:28:36.867 [2024-11-27 05:50:24.554564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.554596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.554710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.554742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.554854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.554885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.555055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.555086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.555276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.555308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 
00:28:36.867 [2024-11-27 05:50:24.555568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.555599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.555792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.555824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.556108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.556139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.556375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.556407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.556595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.556626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 
00:28:36.867 [2024-11-27 05:50:24.556823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.556860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.556983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.557014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.557220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.557251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.557382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.557413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.557609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.557640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 
00:28:36.867 [2024-11-27 05:50:24.557894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.557926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.558188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.558219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.558346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.558377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.558549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.558579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.558792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.558824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 
00:28:36.867 [2024-11-27 05:50:24.559014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.559045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.559163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.559194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.559408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.559440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.867 [2024-11-27 05:50:24.559622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.867 [2024-11-27 05:50:24.559653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.867 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.559876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.559907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 
00:28:36.868 [2024-11-27 05:50:24.560092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.560122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.560309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.560340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.560503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.560534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.560677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.560710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.560910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.560941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 
00:28:36.868 [2024-11-27 05:50:24.561113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.561144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.561314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.561345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.561471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.561502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.561760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.561792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.561918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.561949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 
00:28:36.868 [2024-11-27 05:50:24.562188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.562218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.562409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.562441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.562561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.562592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.562782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.562815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.562997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.563028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 
00:28:36.868 [2024-11-27 05:50:24.563211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.563242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.563547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.563578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.563818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.563851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.564122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.564154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.564277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.564308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 
00:28:36.868 [2024-11-27 05:50:24.564497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.564528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.564716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.564748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.564883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.564914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.565021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.565052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.565227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.565259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 
00:28:36.868 [2024-11-27 05:50:24.565530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.565566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.565832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.565864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.565976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.566008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.566104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.566135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.566239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.566270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 
00:28:36.868 [2024-11-27 05:50:24.566459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.566490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.566611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.566643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.566826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.566897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.567039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.567076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 00:28:36.868 [2024-11-27 05:50:24.567259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.868 [2024-11-27 05:50:24.567292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.868 qpair failed and we were unable to recover it. 
00:28:36.869 [2024-11-27 05:50:24.567557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.567590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.567693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.567726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.567850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.567881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.568080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.568112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.568403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.568434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 
00:28:36.869 [2024-11-27 05:50:24.568628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.568660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.568847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.568878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.569075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.569107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.569347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.569379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.569597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.569629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 
00:28:36.869 [2024-11-27 05:50:24.569813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.569844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.570030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.570062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.570176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.570206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.570380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.570411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 00:28:36.869 [2024-11-27 05:50:24.570613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.869 [2024-11-27 05:50:24.570645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.869 qpair failed and we were unable to recover it. 
00:28:36.869 [2024-11-27 05:50:24.570859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.869 [2024-11-27 05:50:24.570891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.869 qpair failed and we were unable to recover it.
(the same connect() failed / sock connection error / "qpair failed and we were unable to recover it" sequence repeats continuously from 05:50:24.570 to 05:50:24.594, first for tqpair=0x7ff204000b90, then for tqpair=0x7ff210000b90, then for tqpair=0x1c26be0, always against addr=10.0.0.2, port=4420)
00:28:36.873 [2024-11-27 05:50:24.594493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.594524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.594666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.594772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.595014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.595045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.595229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.595258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.595446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.595475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 
00:28:36.873 [2024-11-27 05:50:24.595751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.595784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.595904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.595935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.596062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.596094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.596309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.596341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.596514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.596546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 
00:28:36.873 [2024-11-27 05:50:24.596667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.596706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.596888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.596919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.597052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.597083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.597201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.597231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.597334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.597366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 
00:28:36.873 [2024-11-27 05:50:24.597539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.597570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.597679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.597710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.597848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.597878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.598001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.598031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.598211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.598241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 
00:28:36.873 [2024-11-27 05:50:24.598450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.598480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.598611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.598645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.598776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.598807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.599000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.599030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.599151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.599180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 
00:28:36.873 [2024-11-27 05:50:24.599369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.599398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.599604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.599633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.599821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.599853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.600027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.600058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 00:28:36.873 [2024-11-27 05:50:24.600175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.873 [2024-11-27 05:50:24.600204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.873 qpair failed and we were unable to recover it. 
00:28:36.873 [2024-11-27 05:50:24.600307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.600336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.600599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.600630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.600753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.600785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.600972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.601003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.601235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.601266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 
00:28:36.874 [2024-11-27 05:50:24.601458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.601489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.601662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.601703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.601940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.601971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.602217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.602248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.602436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.602467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 
00:28:36.874 [2024-11-27 05:50:24.602656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.602696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.602938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.602970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.603207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.603237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.603426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.603456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.603566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.603595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 
00:28:36.874 [2024-11-27 05:50:24.603710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.603742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.603947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.603979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.604168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.604199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.604330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.604361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.604655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.604698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 
00:28:36.874 [2024-11-27 05:50:24.604904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.604934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.605037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.605066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.605250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.605279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.605470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.605500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.605668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.605706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 
00:28:36.874 [2024-11-27 05:50:24.605958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.605989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.606100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.606129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.606310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.606339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.606465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.606495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.606761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.606791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 
00:28:36.874 [2024-11-27 05:50:24.607035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.607067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.607169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.607198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.607374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.607406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.607511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.607542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 00:28:36.874 [2024-11-27 05:50:24.607664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.874 [2024-11-27 05:50:24.607716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.874 qpair failed and we were unable to recover it. 
00:28:36.874 [2024-11-27 05:50:24.607843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.607874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it. 00:28:36.875 [2024-11-27 05:50:24.608114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.608145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it. 00:28:36.875 [2024-11-27 05:50:24.608317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.608347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it. 00:28:36.875 [2024-11-27 05:50:24.608545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.608575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it. 00:28:36.875 [2024-11-27 05:50:24.608769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.608801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it. 
00:28:36.875 [2024-11-27 05:50:24.608917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.608945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it. 00:28:36.875 [2024-11-27 05:50:24.609146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.609177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it. 00:28:36.875 [2024-11-27 05:50:24.609371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.609403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it. 00:28:36.875 [2024-11-27 05:50:24.609570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.609600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it. 00:28:36.875 [2024-11-27 05:50:24.609783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.609815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it. 
00:28:36.875 [2024-11-27 05:50:24.609932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.875 [2024-11-27 05:50:24.609963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.875 qpair failed and we were unable to recover it.
[log truncated: the same three-line error sequence — posix_sock_create connect() failed with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock connection error for tqpair=0x1c26be0 (addr=10.0.0.2, port=4420), "qpair failed and we were unable to recover it." — repeats continuously from 05:50:24.609932 through 05:50:24.634490]
00:28:36.878 [2024-11-27 05:50:24.634691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.878 [2024-11-27 05:50:24.634724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.878 qpair failed and we were unable to recover it. 00:28:36.878 [2024-11-27 05:50:24.634897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.878 [2024-11-27 05:50:24.634928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.878 qpair failed and we were unable to recover it. 00:28:36.878 [2024-11-27 05:50:24.635155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.878 [2024-11-27 05:50:24.635185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.878 qpair failed and we were unable to recover it. 00:28:36.878 [2024-11-27 05:50:24.635381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.878 [2024-11-27 05:50:24.635412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.878 qpair failed and we were unable to recover it. 00:28:36.878 [2024-11-27 05:50:24.635585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.878 [2024-11-27 05:50:24.635615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.878 qpair failed and we were unable to recover it. 
00:28:36.878 [2024-11-27 05:50:24.635767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.878 [2024-11-27 05:50:24.635800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.878 qpair failed and we were unable to recover it. 00:28:36.878 [2024-11-27 05:50:24.635918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.635948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.636211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.636242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.636435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.636466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.636706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.636738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 
00:28:36.879 [2024-11-27 05:50:24.636918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.636949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.637068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.637099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.637331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.637362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.637552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.637583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.637709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.637741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 
00:28:36.879 [2024-11-27 05:50:24.637924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.637955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.638252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.638283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.638403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.638434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.638627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.638659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.638850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.638880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 
00:28:36.879 [2024-11-27 05:50:24.639002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.639032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.639212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.639248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.639513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.639544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.639719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.639751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.639922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.639953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 
00:28:36.879 [2024-11-27 05:50:24.640123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.640155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.640333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.640363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.640622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.640653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.640831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.640863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.641050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.641081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 
00:28:36.879 [2024-11-27 05:50:24.641276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.641308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.641563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.641594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.641839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.641871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.642056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.642087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.642281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.642312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 
00:28:36.879 [2024-11-27 05:50:24.642515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.642547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.642664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.642704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.642958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.642989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.643255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.879 [2024-11-27 05:50:24.643286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.879 qpair failed and we were unable to recover it. 00:28:36.879 [2024-11-27 05:50:24.643395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.643425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 
00:28:36.880 [2024-11-27 05:50:24.643596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.643627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.643837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.643869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.644037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.644068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.644248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.644279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.644415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.644446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 
00:28:36.880 [2024-11-27 05:50:24.644697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.644728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.644899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.644930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.645112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.645143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.645382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.645422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.645597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.645628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 
00:28:36.880 [2024-11-27 05:50:24.645771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.645804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.645919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.645950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.646075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.646105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.646292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.646322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.646507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.646538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 
00:28:36.880 [2024-11-27 05:50:24.646804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.646836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.646950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.646980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.647150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.647181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.647321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.647352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.647567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.647597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 
00:28:36.880 [2024-11-27 05:50:24.647733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.647766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.647885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.647916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.648162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.648193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.648386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.648417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.648688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.648721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 
00:28:36.880 [2024-11-27 05:50:24.648837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.648868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.649058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.649089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.649267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.649298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.649481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.649512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.649688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.649719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 
00:28:36.880 [2024-11-27 05:50:24.649838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.649869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.880 [2024-11-27 05:50:24.650055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.880 [2024-11-27 05:50:24.650085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.880 qpair failed and we were unable to recover it. 00:28:36.881 [2024-11-27 05:50:24.650256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.881 [2024-11-27 05:50:24.650287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.881 qpair failed and we were unable to recover it. 00:28:36.881 [2024-11-27 05:50:24.650484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.881 [2024-11-27 05:50:24.650515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.881 qpair failed and we were unable to recover it. 00:28:36.881 [2024-11-27 05:50:24.650756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.881 [2024-11-27 05:50:24.650788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.881 qpair failed and we were unable to recover it. 
00:28:36.881 [2024-11-27 05:50:24.650963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.881 [2024-11-27 05:50:24.651001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.881 qpair failed and we were unable to recover it.
00:28:36.884 [... the same three-line error block repeated 114 more times between 05:50:24.651210 and 05:50:24.676101: every connect() to 10.0.0.2:4420 was refused (errno = 111) and tqpair=0x1c26be0 could not be recovered ...]
00:28:36.884 [2024-11-27 05:50:24.676220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.676251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.676426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.676457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.676566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.676595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.676769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.676801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.676932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.676963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 
00:28:36.884 [2024-11-27 05:50:24.677231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.677262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.677505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.677536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.677736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.677768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.677973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.678005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.678208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.678238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 
00:28:36.884 [2024-11-27 05:50:24.678503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.678534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.678684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.678716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.678853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.678884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.679064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.679094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.884 [2024-11-27 05:50:24.679260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.679291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 
00:28:36.884 [2024-11-27 05:50:24.679465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.884 [2024-11-27 05:50:24.679496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.884 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.679713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.679746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.680055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.680086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.680278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.680310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.680434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.680464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 
00:28:36.885 [2024-11-27 05:50:24.680645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.680687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.680884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.680915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.681100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.681131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.681404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.681435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.681609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.681640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 
00:28:36.885 [2024-11-27 05:50:24.681783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.681815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.682052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.682082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.682271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.682302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.682411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.682443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.682630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.682660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 
00:28:36.885 [2024-11-27 05:50:24.682846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.682877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.683048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.683079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.683318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.683349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.683636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.683667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.683852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.683883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 
00:28:36.885 [2024-11-27 05:50:24.684126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.684157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.684288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.684319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.684590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.684621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.684919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.684950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.685122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.685152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 
00:28:36.885 [2024-11-27 05:50:24.685273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.685304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.685489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.685520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.685774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.685807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.686002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.686033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.686150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.686181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 
00:28:36.885 [2024-11-27 05:50:24.686386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.686418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.686597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.686627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.686815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.686846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.687036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.885 [2024-11-27 05:50:24.687073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.885 qpair failed and we were unable to recover it. 00:28:36.885 [2024-11-27 05:50:24.687249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.687279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 
00:28:36.886 [2024-11-27 05:50:24.687487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.687518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.687646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.687693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.687811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.687842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.688031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.688062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.688184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.688214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 
00:28:36.886 [2024-11-27 05:50:24.688396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.688427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.688595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.688625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.688870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.688903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.689088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.689119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.689353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.689384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 
00:28:36.886 [2024-11-27 05:50:24.689564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.689595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.689769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.689801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.689939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.689971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.690142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.690173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.690360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.690392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 
00:28:36.886 [2024-11-27 05:50:24.690584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.690615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.690829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.690861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.691048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.691079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.691253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.691284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.691523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.691553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 
00:28:36.886 [2024-11-27 05:50:24.691744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.691776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.692028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.692059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.692295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.692326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.692458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.692489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.692688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.692720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 
00:28:36.886 [2024-11-27 05:50:24.692925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.692961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.693171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.693201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.693400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.693431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.693630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.886 [2024-11-27 05:50:24.693661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.886 qpair failed and we were unable to recover it. 00:28:36.886 [2024-11-27 05:50:24.693917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.887 [2024-11-27 05:50:24.693948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:36.887 qpair failed and we were unable to recover it. 
00:28:36.888 [2024-11-27 05:50:24.707613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.888 [2024-11-27 05:50:24.707643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:36.888 qpair failed and we were unable to recover it.
00:28:36.888 [2024-11-27 05:50:24.707876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.888 [2024-11-27 05:50:24.707948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.888 qpair failed and we were unable to recover it.
00:28:36.888 [2024-11-27 05:50:24.708224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.888 [2024-11-27 05:50:24.708259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.888 qpair failed and we were unable to recover it.
00:28:36.888 [2024-11-27 05:50:24.708450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.888 [2024-11-27 05:50:24.708492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.888 qpair failed and we were unable to recover it.
00:28:36.888 [2024-11-27 05:50:24.708699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.888 [2024-11-27 05:50:24.708734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:36.888 qpair failed and we were unable to recover it.
00:28:36.890 [2024-11-27 05:50:24.718603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.718635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.718768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.718800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.719007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.719039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.719227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.719258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.719464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.719495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 
00:28:36.890 [2024-11-27 05:50:24.719630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.719662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.719796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.719828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.720021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.720052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.720187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.720218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.720463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.720495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 
00:28:36.890 [2024-11-27 05:50:24.720681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.720714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.720900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.720931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.721130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.721161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.721381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.721412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.721612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.721644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 
00:28:36.890 [2024-11-27 05:50:24.721938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.721970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.722076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.722107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.722295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.722327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.722509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.722541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.722725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.722758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 
00:28:36.890 [2024-11-27 05:50:24.722943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.722975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.723214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.723246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.723416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.723447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.723570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.723601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 00:28:36.890 [2024-11-27 05:50:24.723796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.890 [2024-11-27 05:50:24.723828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.890 qpair failed and we were unable to recover it. 
00:28:36.890 [2024-11-27 05:50:24.724020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.724050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.724171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.724202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.724402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.724434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.724608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.724639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.724835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.724867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 
00:28:36.891 [2024-11-27 05:50:24.724986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.725017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.725258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.725289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.725501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.725532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.725775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.725819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.725957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.725989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 
00:28:36.891 [2024-11-27 05:50:24.726167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.726198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.726431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.726463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.726656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.726701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.726956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.726988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.727250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.727282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 
00:28:36.891 [2024-11-27 05:50:24.727479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.727509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.727694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.727726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.727902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.727934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.728137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.728168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.728433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.728465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 
00:28:36.891 [2024-11-27 05:50:24.728726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.728757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.728956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.728987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.729196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.729227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.729497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.729528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.729742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.729775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 
00:28:36.891 [2024-11-27 05:50:24.729910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.729941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.730123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.730154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.730349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.730380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.730572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.730602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 00:28:36.891 [2024-11-27 05:50:24.730789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.891 [2024-11-27 05:50:24.730821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.891 qpair failed and we were unable to recover it. 
00:28:36.891 [2024-11-27 05:50:24.731000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.731031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.731274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.731306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.731570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.731602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.731786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.731820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.732050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.732083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 
00:28:36.892 [2024-11-27 05:50:24.732278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.732309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.732534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.732566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.732817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.732850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.733093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.733124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.733296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.733327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 
00:28:36.892 [2024-11-27 05:50:24.733568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.733600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.733737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.733770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.734032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.734063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.734192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.734223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.734402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.734435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 
00:28:36.892 [2024-11-27 05:50:24.734611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.734643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.734779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.734812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.734950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.734982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.735156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.735192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.735408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.735440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 
00:28:36.892 [2024-11-27 05:50:24.735611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.735642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.735850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.735882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.736072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.736105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.736294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.736326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.736431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.736463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 
00:28:36.892 [2024-11-27 05:50:24.736647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.736688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.736822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.736854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.737034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.737065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.737329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.737361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.737563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.737595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 
00:28:36.892 [2024-11-27 05:50:24.737721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.737754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.738019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.738052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.892 [2024-11-27 05:50:24.738302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.892 [2024-11-27 05:50:24.738334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.892 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.738524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.738555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.738806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.738838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 
00:28:36.893 [2024-11-27 05:50:24.739080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.739111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.739373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.739404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.739588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.739620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.739871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.739904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.740022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.740054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 
00:28:36.893 [2024-11-27 05:50:24.740342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.740372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.740562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.740595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.740791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.740824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.740994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.741026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.741196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.741228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 
00:28:36.893 [2024-11-27 05:50:24.741407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.741439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.741556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.741586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.741759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.741792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.741925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.741956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.742194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.742226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 
00:28:36.893 [2024-11-27 05:50:24.742433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.742465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.742650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.742691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.742911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.742942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.743202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.743233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.743414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.743446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 
00:28:36.893 [2024-11-27 05:50:24.743689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.743721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.743909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.743941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.744048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.744079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.744199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.744236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.744497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.744528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 
00:28:36.893 [2024-11-27 05:50:24.744647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.744685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.744871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.744903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.745142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.745173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.745300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.745331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.745515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.745547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 
00:28:36.893 [2024-11-27 05:50:24.745741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.745774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.745951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.745983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.746085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.746117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.893 qpair failed and we were unable to recover it. 00:28:36.893 [2024-11-27 05:50:24.746306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.893 [2024-11-27 05:50:24.746337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.746516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.746548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 
00:28:36.894 [2024-11-27 05:50:24.746729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.746761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.746916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.746948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.747072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.747104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.747369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.747401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.747503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.747534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 
00:28:36.894 [2024-11-27 05:50:24.747706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.747738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.747923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.747955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.748088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.748119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.748324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.748355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.748527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.748559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 
00:28:36.894 [2024-11-27 05:50:24.748823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.748854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.749098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.749130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.749345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.749376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.749502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.749533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.749712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.749744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 
00:28:36.894 [2024-11-27 05:50:24.749879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.749910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.750163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.750193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.750365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.750396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.750591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.750623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.750868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.750901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 
00:28:36.894 [2024-11-27 05:50:24.751034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.751065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.751248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.751280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.751465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.751497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.751605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.751637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.751829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.751862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 
00:28:36.894 [2024-11-27 05:50:24.752062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.752093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.752277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.752308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.752436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.752468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.752731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.752769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.752895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.752927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 
00:28:36.894 [2024-11-27 05:50:24.753206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.753238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.753365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.753398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.894 qpair failed and we were unable to recover it. 00:28:36.894 [2024-11-27 05:50:24.753513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.894 [2024-11-27 05:50:24.753544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.753737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.753769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.754034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.754066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 
00:28:36.895 [2024-11-27 05:50:24.754238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.754270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.754554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.754586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.754706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.754739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.754935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.754967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.755252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.755283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 
00:28:36.895 [2024-11-27 05:50:24.755533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.755565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.755807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.755839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.756030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.756061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.756180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.756212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.756355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.756386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 
00:28:36.895 [2024-11-27 05:50:24.756574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.756605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.756725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.756757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.756939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.756970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.757167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.757198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 00:28:36.895 [2024-11-27 05:50:24.757464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.895 [2024-11-27 05:50:24.757496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:36.895 qpair failed and we were unable to recover it. 
00:28:36.896 [2024-11-27 05:50:24.767515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.896 [2024-11-27 05:50:24.767585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.896 qpair failed and we were unable to recover it.
00:28:36.896 [2024-11-27 05:50:24.767785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.896 [2024-11-27 05:50:24.767823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.896 qpair failed and we were unable to recover it.
00:28:36.897 [2024-11-27 05:50:24.768006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.897 [2024-11-27 05:50:24.768038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.897 qpair failed and we were unable to recover it.
00:28:36.897 [2024-11-27 05:50:24.768227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.897 [2024-11-27 05:50:24.768258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.897 qpair failed and we were unable to recover it.
00:28:36.897 [2024-11-27 05:50:24.768433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.897 [2024-11-27 05:50:24.768465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:36.897 qpair failed and we were unable to recover it.
00:28:36.898 [2024-11-27 05:50:24.782184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.898 [2024-11-27 05:50:24.782215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.898 qpair failed and we were unable to recover it. 00:28:36.898 [2024-11-27 05:50:24.782331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.898 [2024-11-27 05:50:24.782362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.898 qpair failed and we were unable to recover it. 00:28:36.898 [2024-11-27 05:50:24.782557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.898 [2024-11-27 05:50:24.782589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.898 qpair failed and we were unable to recover it. 00:28:36.898 [2024-11-27 05:50:24.782831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.898 [2024-11-27 05:50:24.782863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.898 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.783041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.783073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 
00:28:36.899 [2024-11-27 05:50:24.783255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.783287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.783462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.783494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.783664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.783704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.783824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.783855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.783980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.784013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 
00:28:36.899 [2024-11-27 05:50:24.784136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.784167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.784350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.784381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.784568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.784600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.784840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.784872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.785057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.785089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 
00:28:36.899 [2024-11-27 05:50:24.785215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.785248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.785436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.785467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.785588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.785620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.785763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.785796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.785915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.785947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 
00:28:36.899 [2024-11-27 05:50:24.786066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.786097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.786309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.786341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.786551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.786583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.786794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.786826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.787010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.787042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 
00:28:36.899 [2024-11-27 05:50:24.787242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.787274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.787403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.787434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.787691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.787724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.787981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.788018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.788146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.788177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 
00:28:36.899 [2024-11-27 05:50:24.788348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.788379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.788569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.788601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.788831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.788864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.788983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.789015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.789196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.789228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 
00:28:36.899 [2024-11-27 05:50:24.789418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.789449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.789585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.789616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.899 qpair failed and we were unable to recover it. 00:28:36.899 [2024-11-27 05:50:24.789802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.899 [2024-11-27 05:50:24.789835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.789955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.789986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.790255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.790287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 
00:28:36.900 [2024-11-27 05:50:24.790490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.790522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.790730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.790763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.790904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.790935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.791114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.791146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.791388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.791420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 
00:28:36.900 [2024-11-27 05:50:24.791620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.791652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.791922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.791954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.792086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.792117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.792364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.792395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.792570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.792601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 
00:28:36.900 [2024-11-27 05:50:24.792838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.792870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.793041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.793071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.793182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.793213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.793501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.793532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.793636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.793666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 
00:28:36.900 [2024-11-27 05:50:24.793784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.793816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.793941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.793973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.794215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.794247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.794386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.794417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.794587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.794618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 
00:28:36.900 [2024-11-27 05:50:24.794816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.794849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.795043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.795074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.795317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.795348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.795475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.795507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.795693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.795726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 
00:28:36.900 [2024-11-27 05:50:24.795909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.795940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.796184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.796214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.796331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.796363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.796549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.796586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.796779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.796811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 
00:28:36.900 [2024-11-27 05:50:24.797052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.797083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.797266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.797298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.797537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.797568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.797826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.797860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 00:28:36.900 [2024-11-27 05:50:24.798039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.900 [2024-11-27 05:50:24.798069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.900 qpair failed and we were unable to recover it. 
00:28:36.900 [2024-11-27 05:50:24.798190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.901 [2024-11-27 05:50:24.798221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.901 qpair failed and we were unable to recover it. 00:28:36.901 [2024-11-27 05:50:24.798387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.901 [2024-11-27 05:50:24.798420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.901 qpair failed and we were unable to recover it. 00:28:36.901 [2024-11-27 05:50:24.798596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.901 [2024-11-27 05:50:24.798627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.901 qpair failed and we were unable to recover it. 00:28:36.901 [2024-11-27 05:50:24.798767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.901 [2024-11-27 05:50:24.798799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.901 qpair failed and we were unable to recover it. 00:28:36.901 [2024-11-27 05:50:24.799010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.901 [2024-11-27 05:50:24.799042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.901 qpair failed and we were unable to recover it. 
00:28:36.904 [2024-11-27 05:50:24.823366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.823398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.823586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.823617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.823809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.823842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.824044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.824076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.824188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.824219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 
00:28:36.904 [2024-11-27 05:50:24.824336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.824368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.824487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.824518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.824752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.824784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.824912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.824943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.825184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.825214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 
00:28:36.904 [2024-11-27 05:50:24.825407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.825438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.825705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.825738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.825919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.825951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.826123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.826154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.826338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.826369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 
00:28:36.904 [2024-11-27 05:50:24.826557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.826588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.826777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.826810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.826983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.827015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.827197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.827228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.827339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.827371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 
00:28:36.904 [2024-11-27 05:50:24.827548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.827579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.827768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.827799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.827946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.827978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.828224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.828255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.828514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.828544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 
00:28:36.904 [2024-11-27 05:50:24.828657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.828706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.828903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.828934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.829123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.829154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.829329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.904 [2024-11-27 05:50:24.829360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.904 qpair failed and we were unable to recover it. 00:28:36.904 [2024-11-27 05:50:24.829555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.829586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 
00:28:36.905 [2024-11-27 05:50:24.829765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.829798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.829982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.830012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.830190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.830222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.830394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.830426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.830553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.830585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 
00:28:36.905 [2024-11-27 05:50:24.830809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.830842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.830970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.831001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.831184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.831217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.831322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.831354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.831545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.831576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 
00:28:36.905 [2024-11-27 05:50:24.831816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.831849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.832037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.832069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.832236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.832267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.832534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.832565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.832784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.832816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 
00:28:36.905 [2024-11-27 05:50:24.833009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.833041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.833235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.833266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.833465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.833497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.833684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.833717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.834026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.834058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 
00:28:36.905 [2024-11-27 05:50:24.834197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.834228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.834358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.834389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.834534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.834566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.834758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.834791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 00:28:36.905 [2024-11-27 05:50:24.834979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.905 [2024-11-27 05:50:24.835011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:36.905 qpair failed and we were unable to recover it. 
00:28:37.186 [2024-11-27 05:50:24.835146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.186 [2024-11-27 05:50:24.835177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.186 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.835299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.835330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.835452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.835481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.835656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.835694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.835890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.835920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 
00:28:37.187 [2024-11-27 05:50:24.836112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.836142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.836347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.836378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.836533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.836563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.836689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.836720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.836962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.836992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 
00:28:37.187 [2024-11-27 05:50:24.837273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.837309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.837433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.837463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.837565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.837595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.837785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.837816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.837976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.838006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 
00:28:37.187 [2024-11-27 05:50:24.838127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.838157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.838410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.838439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.838705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.838738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.838917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.838949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.839142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.839172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 
00:28:37.187 [2024-11-27 05:50:24.839364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.839395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.839636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.839667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.839853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.839884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.840058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.840089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.840279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.840311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 
00:28:37.187 [2024-11-27 05:50:24.840428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.840459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.840744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.840777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.840978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.841010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.841227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.841259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.841441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.841472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 
00:28:37.187 [2024-11-27 05:50:24.841608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.841639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.841887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.841918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.842209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.842240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.842360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.842389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.842688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.842721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 
00:28:37.187 [2024-11-27 05:50:24.842970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.843001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.843237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.843268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.843532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.187 [2024-11-27 05:50:24.843563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.187 qpair failed and we were unable to recover it. 00:28:37.187 [2024-11-27 05:50:24.843684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.843716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.843953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.843984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 
00:28:37.188 [2024-11-27 05:50:24.844253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.844284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.844528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.844559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.844691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.844722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.844905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.844937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.845063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.845093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 
00:28:37.188 [2024-11-27 05:50:24.845212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.845240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.845440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.845472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.845696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.845729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.845909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.845940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.846063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.846092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 
00:28:37.188 [2024-11-27 05:50:24.846193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.846227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.846409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.846440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.846606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.846637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.846832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.846864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.847123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.847154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 
00:28:37.188 [2024-11-27 05:50:24.847292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.847322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.847582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.847613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.847807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.847838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.847968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.847998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.848116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.848146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 
00:28:37.188 [2024-11-27 05:50:24.848316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.848346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.848525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.848557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.848820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.848852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.849035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.849066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.849190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.849221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 
00:28:37.188 [2024-11-27 05:50:24.849392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.849424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.849615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.849646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.849896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.849927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.850040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.850069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.850307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.850337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 
00:28:37.188 [2024-11-27 05:50:24.850507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.850538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.850802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.850834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.850954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.850983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.851114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.851144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.188 [2024-11-27 05:50:24.851347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.851378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 
00:28:37.188 [2024-11-27 05:50:24.851575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.188 [2024-11-27 05:50:24.851606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.188 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.851819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.851850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.852038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.852069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.852240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.852270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.852507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.852539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 
00:28:37.189 [2024-11-27 05:50:24.852792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.852823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.852944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.852975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.853188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.853219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.853402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.853433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.853550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.853580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 
00:28:37.189 [2024-11-27 05:50:24.853709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.853742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.853933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.853965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.854084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.854115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.854377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.854409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.854527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.854559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 
00:28:37.189 [2024-11-27 05:50:24.854686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.854723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.854857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.854888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.855067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.855099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.855308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.855340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.855456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.855487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 
00:28:37.189 [2024-11-27 05:50:24.855594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.855625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.855817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.855849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.855962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.855994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.856126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.856156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.856398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.856429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 
00:28:37.189 [2024-11-27 05:50:24.856690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.856722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.856827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.856857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.857042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.857073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.857315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.857347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.857525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.857556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 
00:28:37.189 [2024-11-27 05:50:24.857797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.857831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.857963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.857994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.858108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.858139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.858308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.858340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.858512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.858544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 
00:28:37.189 [2024-11-27 05:50:24.858818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.858851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.858966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.858998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.859237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.859268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.189 [2024-11-27 05:50:24.859466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.189 [2024-11-27 05:50:24.859498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.189 qpair failed and we were unable to recover it. 00:28:37.190 [2024-11-27 05:50:24.859679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.859712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 
00:28:37.190 [2024-11-27 05:50:24.859827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.859857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 00:28:37.190 [2024-11-27 05:50:24.860032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.860063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 00:28:37.190 [2024-11-27 05:50:24.860185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.860217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 00:28:37.190 [2024-11-27 05:50:24.860406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.860437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 00:28:37.190 [2024-11-27 05:50:24.860613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.860644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 
00:28:37.190 [2024-11-27 05:50:24.860752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.860781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 00:28:37.190 [2024-11-27 05:50:24.861042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.861073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 00:28:37.190 [2024-11-27 05:50:24.861190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.861221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 00:28:37.190 [2024-11-27 05:50:24.861459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.861489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 00:28:37.190 [2024-11-27 05:50:24.861697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.190 [2024-11-27 05:50:24.861730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.190 qpair failed and we were unable to recover it. 
00:28:37.190 [2024-11-27 05:50:24.861982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.862013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.862131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.862161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.862334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.862364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.862537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.862569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.862754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.862786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.863049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.863090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.863223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.863255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.863523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.863553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.863757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.863788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.864048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.864080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.864205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.864236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.864432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.864463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.864638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.864682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.864947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.864979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.865104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.865135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.865349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.865379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.865481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.865513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.865782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.865814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.865988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.866020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.866150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.866182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.866287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.866318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.866537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.866569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.866754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.866788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.867032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.867063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.867238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.867269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.867439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.867470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.190 qpair failed and we were unable to recover it.
00:28:37.190 [2024-11-27 05:50:24.867740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.190 [2024-11-27 05:50:24.867772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.867877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.867908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.868180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.868211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.868384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.868416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.868546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.868576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.868865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.868897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.869170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.869201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.869464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.869495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.869667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.869716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.869889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.869921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.870184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.870216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.870404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.870435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.870615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.870646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.870918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.870950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.871128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.871159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.871342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.871372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.871559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.871591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.871776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.871808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.871998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.872029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.872143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.872180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.872368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.872399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.872582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.872612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.872808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.872841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.873035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.873066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.873319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.873350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.873479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.873510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.873699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.873731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.873920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.873951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.874135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.874166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.874405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.874436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.874611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.874642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.874910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.191 [2024-11-27 05:50:24.874941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.191 qpair failed and we were unable to recover it.
00:28:37.191 [2024-11-27 05:50:24.875115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.875146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.875277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.875308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.875414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.875445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.875643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.875684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.875941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.875972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.876100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.876130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.876371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.876403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.876521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.876553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.876742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.876775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.876875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.876907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.877169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.877201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.877338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.877369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.877486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.877517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.877651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.877701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.877885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.877917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.878204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.878235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.878429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.878460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.878641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.878680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.878814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.878845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.878966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.878998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.879177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.879208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.879469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.879500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.879685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.879718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.879839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.879872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.880059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.880090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.880261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.880292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.880415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.880446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.880641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.880686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.880877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.880907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.881100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.881133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.881274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.881305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.881498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.881528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.881708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.881741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.881916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.881951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.882070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.882102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.882279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.882310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.882431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.192 [2024-11-27 05:50:24.882462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.192 qpair failed and we were unable to recover it.
00:28:37.192 [2024-11-27 05:50:24.882633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.192 [2024-11-27 05:50:24.882664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.882860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.882892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.883133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.883164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.883333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.883364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.883554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.883585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 
00:28:37.193 [2024-11-27 05:50:24.883706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.883740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.883985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.884017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.884189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.884220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.884464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.884496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.884666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.884703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 
00:28:37.193 [2024-11-27 05:50:24.884944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.884976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.885105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.885137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.885265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.885296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.885414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.885446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.885575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.885607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 
00:28:37.193 [2024-11-27 05:50:24.885780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.885814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.886059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.886091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.886435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.886506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.886696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.886766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.886975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.887012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 
00:28:37.193 [2024-11-27 05:50:24.887288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.887320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.887509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.887540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.887683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.887716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.887895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.887926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.888193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.888224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 
00:28:37.193 [2024-11-27 05:50:24.888446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.888477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.888618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.888649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.888849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.888881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.889054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.889086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.889326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.889357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 
00:28:37.193 [2024-11-27 05:50:24.889604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.889644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.889775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.889806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.890015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.890046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.890177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.890208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.890448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.890479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 
00:28:37.193 [2024-11-27 05:50:24.890604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.890635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.890839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.193 [2024-11-27 05:50:24.890873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.193 qpair failed and we were unable to recover it. 00:28:37.193 [2024-11-27 05:50:24.891085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.891116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.891300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.891331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.891594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.891626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 
00:28:37.194 [2024-11-27 05:50:24.891761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.891794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.891899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.891930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.892143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.892175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.892344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.892376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.892588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.892619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 
00:28:37.194 [2024-11-27 05:50:24.892816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.892848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.893034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.893066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.893261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.893293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.893508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.893540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.893822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.893854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 
00:28:37.194 [2024-11-27 05:50:24.894116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.894146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.894258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.894289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.894549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.894580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.894693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.894724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.894919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.894949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 
00:28:37.194 [2024-11-27 05:50:24.895118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.895150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.895340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.895370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.895557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.895599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.895714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.895747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.896017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.896049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 
00:28:37.194 [2024-11-27 05:50:24.896338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.896369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.896515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.896546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.896813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.896845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.897031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.897062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.897261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.897291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 
00:28:37.194 [2024-11-27 05:50:24.897475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.897506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.897714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.897745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.897944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.897975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.898160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.898191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.898305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.898336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 
00:28:37.194 [2024-11-27 05:50:24.898454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.898485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.898665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.898710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.898831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.898862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.899103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.899134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.194 [2024-11-27 05:50:24.899269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.899301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 
00:28:37.194 [2024-11-27 05:50:24.899533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.194 [2024-11-27 05:50:24.899564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.194 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.899739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.899771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.900013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.900044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.900216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.900246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.900417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.900448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 
00:28:37.195 [2024-11-27 05:50:24.900640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.900680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.900900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.900931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.901103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.901134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.901244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.901275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.901455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.901492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 
00:28:37.195 [2024-11-27 05:50:24.901614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.901645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.901771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.901801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.902039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.902071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.902203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.902234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 00:28:37.195 [2024-11-27 05:50:24.902404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.195 [2024-11-27 05:50:24.902435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.195 qpair failed and we were unable to recover it. 
00:28:37.198 [2024-11-27 05:50:24.926728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.926760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.926901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.926933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.927100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.927131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.927308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.927340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.927475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.927506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 
00:28:37.198 [2024-11-27 05:50:24.927624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.927655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.927861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.927899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.928030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.928061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.928177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.928208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.928330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.928361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 
00:28:37.198 [2024-11-27 05:50:24.928550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.928580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.928696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.928727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.928914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.928945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.929208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.929238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.929547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.929578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 
00:28:37.198 [2024-11-27 05:50:24.929751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.929782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.929995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.930026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.930169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.930200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.930396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.930428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.930637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.930668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 
00:28:37.198 [2024-11-27 05:50:24.930871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.930902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.931030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.931060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.931256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.931287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.931484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.931514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 00:28:37.198 [2024-11-27 05:50:24.931699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.198 [2024-11-27 05:50:24.931731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.198 qpair failed and we were unable to recover it. 
00:28:37.199 [2024-11-27 05:50:24.931903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.931934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.932107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.932138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.932268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.932299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.932428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.932459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.932650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.932690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 
00:28:37.199 [2024-11-27 05:50:24.932877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.932908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.933048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.933080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.933344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.933375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.933642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.933689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.933825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.933856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 
00:28:37.199 [2024-11-27 05:50:24.933975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.934005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.934185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.934217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.934457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.934488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.934757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.934789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.935053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.935083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 
00:28:37.199 [2024-11-27 05:50:24.935335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.935366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.935468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.935497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.935608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.935637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.935888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.935920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.936023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.936052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 
00:28:37.199 [2024-11-27 05:50:24.936306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.936337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.936597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.936628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.936759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.936790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.937042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.937072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.937258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.937289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 
00:28:37.199 [2024-11-27 05:50:24.937460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.937491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.937691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.937724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.937904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.937934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.938117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.938147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.938341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.938371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 
00:28:37.199 [2024-11-27 05:50:24.938599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.938629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.938893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.938924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.939030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.939060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.939185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.939216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.939404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.939435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 
00:28:37.199 [2024-11-27 05:50:24.939620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.939650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.939856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.939887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.940015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.199 [2024-11-27 05:50:24.940046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.199 qpair failed and we were unable to recover it. 00:28:37.199 [2024-11-27 05:50:24.940190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.940221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.940461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.940491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 
00:28:37.200 [2024-11-27 05:50:24.940598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.940627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.940766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.940796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.940976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.941007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.941109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.941138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.941396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.941427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 
00:28:37.200 [2024-11-27 05:50:24.941628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.941658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.941846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.941877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.942079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.942110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.942307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.942338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.942630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.942661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 
00:28:37.200 [2024-11-27 05:50:24.942813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.942845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.943020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.943050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.943242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.943273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.943488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.943519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.943693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.943727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 
00:28:37.200 [2024-11-27 05:50:24.943914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.943945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.944159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.944190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.944427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.944457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.944597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.944629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.944887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.944919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 
00:28:37.200 [2024-11-27 05:50:24.945101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.945132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.945247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.945276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.945396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.945428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.945543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.945572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.945744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.945776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 
00:28:37.200 [2024-11-27 05:50:24.945950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.945983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.946199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.946229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.946488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.946519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.946639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.946676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.946802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.946832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 
00:28:37.200 [2024-11-27 05:50:24.947006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.947037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.200 [2024-11-27 05:50:24.947284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.200 [2024-11-27 05:50:24.947315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.200 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.947450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.947480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.947597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.947626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.947875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.947908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 
00:28:37.201 [2024-11-27 05:50:24.948092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.948124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.948311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.948347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.948515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.948545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.948692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.948725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.948915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.948946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 
00:28:37.201 [2024-11-27 05:50:24.949122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.949153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.949347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.949378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.949555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.949587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.949793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.949825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.949953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.949983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 
00:28:37.201 [2024-11-27 05:50:24.950244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.950274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.950480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.950510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.950647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.950703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.950917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.950948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.951133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.951165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 
00:28:37.201 [2024-11-27 05:50:24.951349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.951380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.951623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.951654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.952107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.952144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.952347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.952381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.952652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.952695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 
00:28:37.201 [2024-11-27 05:50:24.952820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.952851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.953113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.953144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.953416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.953447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.953617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.953647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.953865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.953896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 
00:28:37.201 [2024-11-27 05:50:24.954012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.954042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.954239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.954269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.954463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.954494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.954621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.954659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.954815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.954853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 
00:28:37.201 [2024-11-27 05:50:24.955051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.955082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.955269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.955301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.955593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.955623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.955813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.955845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 00:28:37.201 [2024-11-27 05:50:24.956107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.956137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.201 qpair failed and we were unable to recover it. 
00:28:37.201 [2024-11-27 05:50:24.956390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.201 [2024-11-27 05:50:24.956420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.956526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.956556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.956837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.956869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.957060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.957091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.957271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.957302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 
00:28:37.202 [2024-11-27 05:50:24.957550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.957580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.957713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.957744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.958015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.958045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.958230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.958261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.958448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.958478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 
00:28:37.202 [2024-11-27 05:50:24.958660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.958723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.958842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.958873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.959005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.959035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.959318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.959349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.959469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.959499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 
00:28:37.202 [2024-11-27 05:50:24.959713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.959746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.960008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.960038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.960176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.960206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.960334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.960364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.960495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.960525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 
00:28:37.202 [2024-11-27 05:50:24.960764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.960802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.960911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.960941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.961149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.961179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.961351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.961382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.961558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.961589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 
00:28:37.202 [2024-11-27 05:50:24.961847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.961878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.962066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.962097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.962233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.962265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.962456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.962486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.962605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.962636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 
00:28:37.202 [2024-11-27 05:50:24.962782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.962813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.963024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.963054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.963292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.963323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.963523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.963553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.963824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.963857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 
00:28:37.202 [2024-11-27 05:50:24.964053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.964083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.964207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.964238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.964418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.964447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.202 [2024-11-27 05:50:24.964659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.202 [2024-11-27 05:50:24.964703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.202 qpair failed and we were unable to recover it. 00:28:37.203 [2024-11-27 05:50:24.964941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.203 [2024-11-27 05:50:24.964972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.203 qpair failed and we were unable to recover it. 
00:28:37.203 [2024-11-27 05:50:24.965089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.203 [2024-11-27 05:50:24.965119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.203 qpair failed and we were unable to recover it. 00:28:37.203 [2024-11-27 05:50:24.965358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.203 [2024-11-27 05:50:24.965388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.203 qpair failed and we were unable to recover it. 00:28:37.203 [2024-11-27 05:50:24.965569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.203 [2024-11-27 05:50:24.965599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.203 qpair failed and we were unable to recover it. 00:28:37.203 [2024-11-27 05:50:24.965706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.203 [2024-11-27 05:50:24.965739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.203 qpair failed and we were unable to recover it. 00:28:37.203 [2024-11-27 05:50:24.965916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.203 [2024-11-27 05:50:24.965946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.203 qpair failed and we were unable to recover it. 
00:28:37.203 [2024-11-27 05:50:24.966191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.966220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.966397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.966428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.966604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.966634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.966795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.966827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.966936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.966965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.967067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.967097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.967367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.967398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.967661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.967699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.967938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.967969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.968139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.968170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.968383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.968413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.968622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.968652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.968904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.968934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.969136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.969167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.969364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.969394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.969631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.969661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.969892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.969924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.970057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.970088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.970301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.970332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.970585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.970615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.970841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.970876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.971130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.971160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.971347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.971377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.971492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.971522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.971650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.971694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.971896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.971926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.972034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.972064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.972231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.972262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.972366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.972397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.972659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.972701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.972926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.972957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.203 qpair failed and we were unable to recover it.
00:28:37.203 [2024-11-27 05:50:24.973137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.203 [2024-11-27 05:50:24.973168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.973349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.973379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.973493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.973524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.973652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.973692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.973886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.973916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.974121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.974151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.974260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.974290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.974400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.974431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.974605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.974634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.974769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.974800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.974905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.974936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.975194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.975223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.975433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.975470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.975587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.975618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.975755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.975786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.975979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.976010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.976160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.976190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.976361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.976391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.976494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.976524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.976700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.976733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.976897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.976927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.977189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.977219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.977387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.977418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.977620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.977650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.977864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.977895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.978007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.978038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.978173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.978203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.978339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.978370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.978615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.978646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.978789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.978820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.979063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.979092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.979336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.979366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.979476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.979507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.979702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.979736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.979908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.979938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.980064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.980094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.980299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.980329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.980601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.980631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.980880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.204 [2024-11-27 05:50:24.980911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.204 qpair failed and we were unable to recover it.
00:28:37.204 [2024-11-27 05:50:24.981149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.981185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.981389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.981420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.981662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.981702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.981941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.981972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.982088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.982118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.982300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.982329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.982457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.982487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.982689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.982721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.982824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.982854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.983041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.983072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.983253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.983284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.983551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.983582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.983823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.983854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.983996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.984026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.984224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.984254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.984498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.984529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.984791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.984822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.985020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.985051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.985227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.985257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.985369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.985399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.985523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.985553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.985687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.985718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.985887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.985917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.986088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.986119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.986296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.205 [2024-11-27 05:50:24.986326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.205 qpair failed and we were unable to recover it.
00:28:37.205 [2024-11-27 05:50:24.986458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.205 [2024-11-27 05:50:24.986489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.205 qpair failed and we were unable to recover it. 00:28:37.205 [2024-11-27 05:50:24.986697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.205 [2024-11-27 05:50:24.986728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.205 qpair failed and we were unable to recover it. 00:28:37.205 [2024-11-27 05:50:24.986920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.205 [2024-11-27 05:50:24.986951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.205 qpair failed and we were unable to recover it. 00:28:37.205 [2024-11-27 05:50:24.987095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.205 [2024-11-27 05:50:24.987125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.205 qpair failed and we were unable to recover it. 00:28:37.205 [2024-11-27 05:50:24.987256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.205 [2024-11-27 05:50:24.987287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.205 qpair failed and we were unable to recover it. 
00:28:37.205 [2024-11-27 05:50:24.987406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.205 [2024-11-27 05:50:24.987436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.205 qpair failed and we were unable to recover it. 00:28:37.205 [2024-11-27 05:50:24.987645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.205 [2024-11-27 05:50:24.987686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.205 qpair failed and we were unable to recover it. 00:28:37.205 [2024-11-27 05:50:24.987884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.205 [2024-11-27 05:50:24.987914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.205 qpair failed and we were unable to recover it. 00:28:37.205 [2024-11-27 05:50:24.988049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.205 [2024-11-27 05:50:24.988079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.205 qpair failed and we were unable to recover it. 00:28:37.205 [2024-11-27 05:50:24.988361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.988391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 
00:28:37.206 [2024-11-27 05:50:24.988532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.988561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.988809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.988841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.988979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.989010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.989186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.989216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.989350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.989381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 
00:28:37.206 [2024-11-27 05:50:24.989565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.989595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.989867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.989936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.990191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.990258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.990488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.990523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.990658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.990705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 
00:28:37.206 [2024-11-27 05:50:24.990880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.990911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.991039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.991071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.991334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.991366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.991497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.991528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.991791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.991822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 
00:28:37.206 [2024-11-27 05:50:24.992021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.992052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.992318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.992349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.992626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.992657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.992795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.992826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.992999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.993040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 
00:28:37.206 [2024-11-27 05:50:24.993276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.993306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.993497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.993528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.993767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.993800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.993991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.994021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.994192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.994223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 
00:28:37.206 [2024-11-27 05:50:24.994436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.994467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.994592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.994623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.994763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.994795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.994979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.995011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.995202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.995233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 
00:28:37.206 [2024-11-27 05:50:24.995353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.995384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.995496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.995528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.995645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.995686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.995911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.995942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.996191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.996222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 
00:28:37.206 [2024-11-27 05:50:24.996490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.996520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.996642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.206 [2024-11-27 05:50:24.996682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.206 qpair failed and we were unable to recover it. 00:28:37.206 [2024-11-27 05:50:24.996893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.996923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.997111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.997142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.997430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.997461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-11-27 05:50:24.997594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.997624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.997807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.997839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.997978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.998009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.998133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.998163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.998284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.998315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-11-27 05:50:24.998488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.998518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.998741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.998810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.998976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.999012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.999205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.999236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.999507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.999539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-11-27 05:50:24.999655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.999697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.999816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.999846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:24.999955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:24.999986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.000095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.000127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.000395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.000427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-11-27 05:50:25.000693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.000726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.000895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.000927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.001038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.001069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.001261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.001292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.001550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.001590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-11-27 05:50:25.001720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.001752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.001872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.001904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.002087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.002118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.002313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.002345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.002608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.002640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-11-27 05:50:25.002760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.002792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.002923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.002955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.003169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.003199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.003328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.003359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.003536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.003566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 
00:28:37.207 [2024-11-27 05:50:25.003667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.003712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.003907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.003937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.004110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.207 [2024-11-27 05:50:25.004140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.207 qpair failed and we were unable to recover it. 00:28:37.207 [2024-11-27 05:50:25.004390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.004421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.004636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.004667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-11-27 05:50:25.004925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.004956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.005167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.005198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.005384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.005415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.005690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.005723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.005984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.006015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-11-27 05:50:25.006208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.006239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.006479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.006510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.006696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.006729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.006914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.006945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.007122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.007153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-11-27 05:50:25.007446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.007477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.007641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.007720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.007943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.007980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.008173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.008206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.008330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.008362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-11-27 05:50:25.008661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.008705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.008894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.008926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.009159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.009191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.009393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.009425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.009611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.009641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-11-27 05:50:25.009777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.009812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.010003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.010035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.010213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.010244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.010418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.010449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.010642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.010704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-11-27 05:50:25.010880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.010911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.011102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.011133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.011370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.011402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.011665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.011706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.011946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.011977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-11-27 05:50:25.012243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.012275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.012461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.012492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.012690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.012724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.012895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.012927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 00:28:37.208 [2024-11-27 05:50:25.013132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.208 [2024-11-27 05:50:25.013164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.208 qpair failed and we were unable to recover it. 
00:28:37.208 [2024-11-27 05:50:25.013426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.013456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.013704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.013736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.013926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.013958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.014228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.014259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.014549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.014580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 
00:28:37.209 [2024-11-27 05:50:25.014774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.014807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.014985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.015016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.015135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.015167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.015361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.015393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.015572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.015604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 
00:28:37.209 [2024-11-27 05:50:25.015777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.015809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.015943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.015974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.016228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.016260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.016524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.016555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.016737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.016771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 
00:28:37.209 [2024-11-27 05:50:25.017012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.017043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.017225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.017262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.017477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.017508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.017630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.017660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.017851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.017881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 
00:28:37.209 [2024-11-27 05:50:25.018121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.018153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.018394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.018425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.018549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.018580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.018847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.018880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.019075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.019106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 
00:28:37.209 [2024-11-27 05:50:25.019302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.019334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.019517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.019548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.019680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.019712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.019836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.019867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.020051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.020084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 
00:28:37.209 [2024-11-27 05:50:25.020357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.020389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.020598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.020630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.020861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.020893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.021087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.021117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.209 qpair failed and we were unable to recover it. 00:28:37.209 [2024-11-27 05:50:25.021329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.209 [2024-11-27 05:50:25.021359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 
00:28:37.210 [2024-11-27 05:50:25.021499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.021531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.021708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.021739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.021865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.021896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.022067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.022098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.022213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.022245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 
00:28:37.210 [2024-11-27 05:50:25.022369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.022400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.022642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.022682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.022858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.022890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.023081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.023112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.023242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.023272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 
00:28:37.210 [2024-11-27 05:50:25.023469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.023501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.023621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.023652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.023907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.023938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.024143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.024173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.024415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.024446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 
00:28:37.210 [2024-11-27 05:50:25.024691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.024724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.024990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.025021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.025278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.025310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.025496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.025528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.025699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.025732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 
00:28:37.210 [2024-11-27 05:50:25.025930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.025961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.026227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.026263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.026379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.026411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.026533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.026564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 00:28:37.210 [2024-11-27 05:50:25.026701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.210 [2024-11-27 05:50:25.026733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.210 qpair failed and we were unable to recover it. 
00:28:37.210 [2024-11-27 05:50:25.026921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.210 [2024-11-27 05:50:25.026951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.210 qpair failed and we were unable to recover it.
00:28:37.210 [... identical entries elided: the same connect() failure (errno = 111, ECONNREFUSED) and qpair recovery failure for tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 repeats through [2024-11-27 05:50:25.051965] (console timestamps 00:28:37.210-00:28:37.213) ...]
00:28:37.213 [2024-11-27 05:50:25.052214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.213 [2024-11-27 05:50:25.052245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.213 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.052366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.052403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.052695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.052727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.052902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.052933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.053150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.053181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 
00:28:37.214 [2024-11-27 05:50:25.053370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.053402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.053666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.053705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.053851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.053881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.054133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.054163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.054338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.054370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 
00:28:37.214 [2024-11-27 05:50:25.054611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.054642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.054882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.054915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.055111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.055142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.055404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.055434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.055543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.055574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 
00:28:37.214 [2024-11-27 05:50:25.055772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.055805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.055992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.056022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.056233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.056264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.056438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.056469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.056585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.056615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 
00:28:37.214 [2024-11-27 05:50:25.056811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.056842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.057022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.057054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.057290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.057322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.057505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.057536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.057665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.057706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 
00:28:37.214 [2024-11-27 05:50:25.057895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.057925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.058110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.058141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.058332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.058364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.058633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.058663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.058888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.058921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 
00:28:37.214 [2024-11-27 05:50:25.059163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.059194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.059394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.214 [2024-11-27 05:50:25.059424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.214 qpair failed and we were unable to recover it. 00:28:37.214 [2024-11-27 05:50:25.059634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.059664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.059869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.059901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.060088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.060119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 
00:28:37.215 [2024-11-27 05:50:25.060229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.060260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.060375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.060405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.060528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.060559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.060747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.060779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.061066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.061102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 
00:28:37.215 [2024-11-27 05:50:25.061205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.061237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.061408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.061446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.061619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.061650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.061915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.061946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.062182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.062213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 
00:28:37.215 [2024-11-27 05:50:25.062350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.062379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.062611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.062643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.062906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.062938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.063061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.063092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.063280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.063310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 
00:28:37.215 [2024-11-27 05:50:25.063424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.063454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.063632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.063662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.063915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.063947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.064128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.064158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.064370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.064401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 
00:28:37.215 [2024-11-27 05:50:25.064680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.064713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.064980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.065011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.065138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.065169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.065349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.065381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.065589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.065620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 
00:28:37.215 [2024-11-27 05:50:25.065734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.065767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.065941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.065973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.066103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.066134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.066321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.066352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.066539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.066570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 
00:28:37.215 [2024-11-27 05:50:25.066762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.066795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.067060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.067091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.067208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.067240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.067417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.067448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 00:28:37.215 [2024-11-27 05:50:25.067631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.215 [2024-11-27 05:50:25.067663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.215 qpair failed and we were unable to recover it. 
00:28:37.216 [2024-11-27 05:50:25.067854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.067887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.068125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.068156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.068338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.068368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.068551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.068582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.068689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.068727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 
00:28:37.216 [2024-11-27 05:50:25.068856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.068887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.069087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.069118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.069366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.069398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.069595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.069626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.069899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.069930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 
00:28:37.216 [2024-11-27 05:50:25.070126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.070156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.070403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.070440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.070727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.070760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.070968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.070999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.071107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.071138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 
00:28:37.216 [2024-11-27 05:50:25.071328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.071359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.071557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.071588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.071762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.071794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.071911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.071943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.072116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.072146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 
00:28:37.216 [2024-11-27 05:50:25.072283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.072313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.072522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.072553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.072745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.072777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.073020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.073051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.073173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.073204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 
00:28:37.216 [2024-11-27 05:50:25.073450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.073481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.073752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.073784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.073960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.073990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.074177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.074209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.074338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.074368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 
00:28:37.216 [2024-11-27 05:50:25.074552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.074584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.074757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.074791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.074996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.075028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.075147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.075177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.075417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.075449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 
00:28:37.216 [2024-11-27 05:50:25.075645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.075688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.075907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.075938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.216 [2024-11-27 05:50:25.076070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.216 [2024-11-27 05:50:25.076101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.216 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.076373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.076406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.076598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.076629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 
00:28:37.217 [2024-11-27 05:50:25.076755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.076788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.076907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.076938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.077128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.077159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.077333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.077365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.077559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.077590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 
00:28:37.217 [2024-11-27 05:50:25.077839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.077872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.078045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.078076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.078266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.078297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.078530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.078561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.078802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.078835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 
00:28:37.217 [2024-11-27 05:50:25.079020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.079051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.079192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.079230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.079521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.079552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.079691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.079723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.079901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.079932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 
00:28:37.217 [2024-11-27 05:50:25.080120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.080151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.080340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.080372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.080631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.080662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.080866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.080898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.081082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.081114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 
00:28:37.217 [2024-11-27 05:50:25.081368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.081399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.081596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.081626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.081816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.081849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.081963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.081994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.082176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.082208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 
00:28:37.217 [2024-11-27 05:50:25.082454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.082487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.082654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.082694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.082817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.082849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.083076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.083108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.083275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.083305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 
00:28:37.217 [2024-11-27 05:50:25.083568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.083599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.083782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.083815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.084002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.084032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.084169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.084200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 00:28:37.217 [2024-11-27 05:50:25.084391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.084422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.217 qpair failed and we were unable to recover it. 
00:28:37.217 [2024-11-27 05:50:25.084609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.217 [2024-11-27 05:50:25.084638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.084758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.084787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.084966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.084996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.085244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.085273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.085524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.085555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 
00:28:37.218 [2024-11-27 05:50:25.085678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.085709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.085900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.085930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.086046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.086076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.086337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.086368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.086486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.086517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 
00:28:37.218 [2024-11-27 05:50:25.086650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.086697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.086938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.086967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.087204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.087234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.087478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.087509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.087631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.087661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 
00:28:37.218 [2024-11-27 05:50:25.087892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.087924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.088119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.088156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.088337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.088368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.088558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.088589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.088768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.088806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 
00:28:37.218 [2024-11-27 05:50:25.089047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.089078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.089203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.089233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.089406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.089436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.089605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.089635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.089846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.089878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 
00:28:37.218 [2024-11-27 05:50:25.089987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.090017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.090136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.090167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.090357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.090388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.090496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.090528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 00:28:37.218 [2024-11-27 05:50:25.090661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.218 [2024-11-27 05:50:25.090701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.218 qpair failed and we were unable to recover it. 
00:28:37.221 [2024-11-27 05:50:25.111281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.221 [2024-11-27 05:50:25.111351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:37.221 qpair failed and we were unable to recover it.
00:28:37.221 [2024-11-27 05:50:25.115294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.221 [2024-11-27 05:50:25.115325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.221 qpair failed and we were unable to recover it. 00:28:37.221 [2024-11-27 05:50:25.115519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.221 [2024-11-27 05:50:25.115549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.221 qpair failed and we were unable to recover it. 00:28:37.221 [2024-11-27 05:50:25.115823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.221 [2024-11-27 05:50:25.115855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.221 qpair failed and we were unable to recover it. 00:28:37.221 [2024-11-27 05:50:25.116040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.221 [2024-11-27 05:50:25.116070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.221 qpair failed and we were unable to recover it. 00:28:37.221 [2024-11-27 05:50:25.116309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.221 [2024-11-27 05:50:25.116340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.221 qpair failed and we were unable to recover it. 
00:28:37.221 [2024-11-27 05:50:25.116512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.221 [2024-11-27 05:50:25.116543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.221 qpair failed and we were unable to recover it. 00:28:37.221 [2024-11-27 05:50:25.116781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.221 [2024-11-27 05:50:25.116811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.117044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.117075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.117388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.117419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.117654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.117698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 
00:28:37.222 [2024-11-27 05:50:25.117934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.117965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.118211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.118243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.118456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.118486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.118667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.118710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.118888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.118919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 
00:28:37.222 [2024-11-27 05:50:25.119139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.119169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.119358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.119389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.119643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.119683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.119873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.119903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.120119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.120151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 
00:28:37.222 [2024-11-27 05:50:25.120390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.120421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.120548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.120579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.120763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.120796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.121072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.121142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.121429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.121465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 
00:28:37.222 [2024-11-27 05:50:25.121739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.121776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.121967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.121998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.122264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.122296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.122562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.122594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.122718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.122751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 
00:28:37.222 [2024-11-27 05:50:25.122950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.122981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.123099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.123131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.123398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.123430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.123623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.123654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.123779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.123811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 
00:28:37.222 [2024-11-27 05:50:25.123999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.124031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.124170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.124218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.124324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.124356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.124491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.124522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.124726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.124758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 
00:28:37.222 [2024-11-27 05:50:25.124891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.124922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.125188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.125220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.125436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.125468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.125656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.222 [2024-11-27 05:50:25.125699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.222 qpair failed and we were unable to recover it. 00:28:37.222 [2024-11-27 05:50:25.125892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.125923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 
00:28:37.223 [2024-11-27 05:50:25.126173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.126206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.126458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.126489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.126682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.126716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.126918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.126949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.127214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.127245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 
00:28:37.223 [2024-11-27 05:50:25.127452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.127484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.127702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.127736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.127945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.127977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.128182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.128214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.128351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.128382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 
00:28:37.223 [2024-11-27 05:50:25.128584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.128616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.128801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.128833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.129066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.129098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.129278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.129309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.129550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.129582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 
00:28:37.223 [2024-11-27 05:50:25.129754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.129792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.130032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.130063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.130202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.130233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.130438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.130474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.130664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.130705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 
00:28:37.223 [2024-11-27 05:50:25.130979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.131010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.131193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.131224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.131417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.131449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.131577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.131609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.131797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.131829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 
00:28:37.223 [2024-11-27 05:50:25.132070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.132102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.132294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.132325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.132510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.132541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.132753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.132786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.132968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.132999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 
00:28:37.223 [2024-11-27 05:50:25.133118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.133148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 00:28:37.223 [2024-11-27 05:50:25.133597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.223 [2024-11-27 05:50:25.133667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.223 qpair failed and we were unable to recover it. 
[identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." messages repeated through 2024-11-27 05:50:25.158938]
00:28:37.226 [2024-11-27 05:50:25.159128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.226 [2024-11-27 05:50:25.159159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.226 qpair failed and we were unable to recover it. 00:28:37.226 [2024-11-27 05:50:25.159442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.226 [2024-11-27 05:50:25.159473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.159765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.159797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.160014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.160044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.160164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.160195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 
00:28:37.227 [2024-11-27 05:50:25.160450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.160481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.160664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.160705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.161004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.161036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.161296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.161327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.161536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.161566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 
00:28:37.227 [2024-11-27 05:50:25.161753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.161786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.161900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.161932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.162050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.162080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.162322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.162352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.162551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.162582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 
00:28:37.227 [2024-11-27 05:50:25.162717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.162749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.162951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.162982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.163093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.163125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.163294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.163324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.163438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.163470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 
00:28:37.227 [2024-11-27 05:50:25.163660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.163701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.163986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.164017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.164256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.164287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.164480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.164511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.164688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.164719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 
00:28:37.227 [2024-11-27 05:50:25.164893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.164924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.165054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.165085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.165211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.165241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.165368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.165399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.165503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.165534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 
00:28:37.227 [2024-11-27 05:50:25.165805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.165837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.166015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.166046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.227 [2024-11-27 05:50:25.166275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.227 [2024-11-27 05:50:25.166306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.227 qpair failed and we were unable to recover it. 00:28:37.512 [2024-11-27 05:50:25.166497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.166540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 00:28:37.512 [2024-11-27 05:50:25.166668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.166722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 
00:28:37.512 [2024-11-27 05:50:25.166963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.166994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 00:28:37.512 [2024-11-27 05:50:25.167113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.167145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 00:28:37.512 [2024-11-27 05:50:25.167332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.167363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 00:28:37.512 [2024-11-27 05:50:25.167549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.167580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 00:28:37.512 [2024-11-27 05:50:25.167794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.167827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 
00:28:37.512 [2024-11-27 05:50:25.167942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.167972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 00:28:37.512 [2024-11-27 05:50:25.168152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.168184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 00:28:37.512 [2024-11-27 05:50:25.168446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.168476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 00:28:37.512 [2024-11-27 05:50:25.168649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.168687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 00:28:37.512 [2024-11-27 05:50:25.168815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.512 [2024-11-27 05:50:25.168846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.512 qpair failed and we were unable to recover it. 
00:28:37.513 [2024-11-27 05:50:25.168951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.168981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.169199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.169230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.169452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.169483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.169692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.169724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.169940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.169970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 
00:28:37.513 [2024-11-27 05:50:25.170208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.170239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.170473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.170504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.170764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.170796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.170980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.171011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.171247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.171279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 
00:28:37.513 [2024-11-27 05:50:25.171515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.171546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.171823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.171854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.172036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.172067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.172192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.172223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.172409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.172439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 
00:28:37.513 [2024-11-27 05:50:25.172632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.172664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.513 [2024-11-27 05:50:25.172778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.513 [2024-11-27 05:50:25.172810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.513 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.173052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.173083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.173339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.173370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.173572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.173604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 
00:28:37.514 [2024-11-27 05:50:25.173797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.173829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.174021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.174052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.174164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.174195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.174295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.174325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.174589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.174620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 
00:28:37.514 [2024-11-27 05:50:25.174753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.174784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.174979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.175010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.175137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.175168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.175352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.175388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.175511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.175542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 
00:28:37.514 [2024-11-27 05:50:25.175720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.175752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.175938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.175969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.176108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.176139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.176311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.176342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.176545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.176575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 
00:28:37.514 [2024-11-27 05:50:25.176765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.176798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.177007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.177038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.177168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.177198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.177301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.177332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.177523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.177554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 
00:28:37.514 [2024-11-27 05:50:25.177761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.177793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.177914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.177945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.178135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.178166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.178368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.178399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.178659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.178719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 
00:28:37.514 [2024-11-27 05:50:25.178833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.178864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.179033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.179064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.179252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.179283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.179524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.179555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.179770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.179802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 
00:28:37.514 [2024-11-27 05:50:25.179989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.180019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.180201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.180232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.180403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.180434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.180684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.180717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 00:28:37.514 [2024-11-27 05:50:25.180835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.514 [2024-11-27 05:50:25.180866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.514 qpair failed and we were unable to recover it. 
00:28:37.514 [2024-11-27 05:50:25.181004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.181036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.181155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.181185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.181430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.181461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.181631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.181662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.181844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.181875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 
00:28:37.515 [2024-11-27 05:50:25.182114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.182144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.182392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.182422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.182620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.182650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.182859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.182891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.183010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.183040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 
00:28:37.515 [2024-11-27 05:50:25.183284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.183315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.183533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.183565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.183704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.183736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.184002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.184039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.184300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.184330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 
00:28:37.515 [2024-11-27 05:50:25.184523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.184554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.184686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.184718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.184834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.184865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.185047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.185077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.185263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.185294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 
00:28:37.515 [2024-11-27 05:50:25.185468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.185500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.185597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.185628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.185818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.185850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.185962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.185993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.186196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.186226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 
00:28:37.515 [2024-11-27 05:50:25.186397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.186428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.186642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.515 [2024-11-27 05:50:25.186690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.515 qpair failed and we were unable to recover it. 00:28:37.515 [2024-11-27 05:50:25.186959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.519 [2024-11-27 05:50:25.186990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.519 qpair failed and we were unable to recover it. 00:28:37.519 [2024-11-27 05:50:25.187104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.519 [2024-11-27 05:50:25.187134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.519 qpair failed and we were unable to recover it. 00:28:37.519 [2024-11-27 05:50:25.187251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.519 [2024-11-27 05:50:25.187282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.519 qpair failed and we were unable to recover it. 
00:28:37.519 [2024-11-27 05:50:25.187459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.519 [2024-11-27 05:50:25.187490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.519 qpair failed and we were unable to recover it. 00:28:37.519 [2024-11-27 05:50:25.187657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.519 [2024-11-27 05:50:25.187700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.519 qpair failed and we were unable to recover it. 00:28:37.519 [2024-11-27 05:50:25.187902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.519 [2024-11-27 05:50:25.187933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.519 qpair failed and we were unable to recover it. 00:28:37.519 [2024-11-27 05:50:25.188032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.519 [2024-11-27 05:50:25.188064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.519 qpair failed and we were unable to recover it. 00:28:37.519 [2024-11-27 05:50:25.188333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.519 [2024-11-27 05:50:25.188364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.519 qpair failed and we were unable to recover it. 
00:28:37.519 [2024-11-27 05:50:25.188615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.519 [2024-11-27 05:50:25.188646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.519 qpair failed and we were unable to recover it. 00:28:37.519 [2024-11-27 05:50:25.188918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.188949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.189139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.189170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.189341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.189372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.189504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.189535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 
00:28:37.520 [2024-11-27 05:50:25.189803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.189835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.190078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.190109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.190380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.190411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.190544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.190575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.190776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.190807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 
00:28:37.520 [2024-11-27 05:50:25.190997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.191028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.191267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.191298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.191432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.191463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.191583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.191614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.191806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.191838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 
00:28:37.520 [2024-11-27 05:50:25.191965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.191996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.192102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.192132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.192307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.192337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.192594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.192630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.192819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.192850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 
00:28:37.520 [2024-11-27 05:50:25.193026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.193057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.193239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.193270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.193451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.193482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.193666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.193708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.193825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.193856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 
00:28:37.520 [2024-11-27 05:50:25.193970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.194001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.194202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.194233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.194350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.194382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.194504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.194535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.194653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.194713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 
00:28:37.520 [2024-11-27 05:50:25.194833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.194864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.194988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.195019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.195131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.195163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.195330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.195361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.195625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.195656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 
00:28:37.520 [2024-11-27 05:50:25.195792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.195824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.196013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.196044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.196216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.196247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.196368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.196399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 00:28:37.520 [2024-11-27 05:50:25.196662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.520 [2024-11-27 05:50:25.196705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.520 qpair failed and we were unable to recover it. 
00:28:37.522 [2024-11-27 05:50:25.207241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.522 [2024-11-27 05:50:25.207311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.522 qpair failed and we were unable to recover it.
00:28:37.522 [2024-11-27 05:50:25.207572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.522 [2024-11-27 05:50:25.207608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.522 qpair failed and we were unable to recover it.
00:28:37.522 [2024-11-27 05:50:25.207826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.522 [2024-11-27 05:50:25.207861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.522 qpair failed and we were unable to recover it.
00:28:37.522 [2024-11-27 05:50:25.207985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.522 [2024-11-27 05:50:25.208016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.522 qpair failed and we were unable to recover it.
00:28:37.522 [2024-11-27 05:50:25.208190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.522 [2024-11-27 05:50:25.208222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.522 qpair failed and we were unable to recover it.
00:28:37.523 [2024-11-27 05:50:25.216146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.523 [2024-11-27 05:50:25.216217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.523 qpair failed and we were unable to recover it.
00:28:37.523 [2024-11-27 05:50:25.216483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.523 [2024-11-27 05:50:25.216519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.523 qpair failed and we were unable to recover it.
00:28:37.523 [2024-11-27 05:50:25.216652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.523 [2024-11-27 05:50:25.216701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.523 qpair failed and we were unable to recover it.
00:28:37.523 [2024-11-27 05:50:25.216890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.523 [2024-11-27 05:50:25.216921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.523 qpair failed and we were unable to recover it.
00:28:37.523 [2024-11-27 05:50:25.217107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.523 [2024-11-27 05:50:25.217137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.523 qpair failed and we were unable to recover it.
00:28:37.523 [2024-11-27 05:50:25.221472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.221503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 00:28:37.523 [2024-11-27 05:50:25.221693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.221726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 00:28:37.523 [2024-11-27 05:50:25.221909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.221946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 00:28:37.523 [2024-11-27 05:50:25.222205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.222236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 00:28:37.523 [2024-11-27 05:50:25.222355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.222386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 
00:28:37.523 [2024-11-27 05:50:25.222573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.222604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 00:28:37.523 [2024-11-27 05:50:25.222871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.222904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 00:28:37.523 [2024-11-27 05:50:25.223166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.223197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 00:28:37.523 [2024-11-27 05:50:25.223387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.223419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 00:28:37.523 [2024-11-27 05:50:25.223614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.223645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 
00:28:37.523 [2024-11-27 05:50:25.223896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.223928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 00:28:37.523 [2024-11-27 05:50:25.224112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.523 [2024-11-27 05:50:25.224143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.523 qpair failed and we were unable to recover it. 00:28:37.523 [2024-11-27 05:50:25.224315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.224346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.224484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.224515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.224728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.224760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 
00:28:37.524 [2024-11-27 05:50:25.224880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.224911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.225103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.225134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.225310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.225341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.225601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.225631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.225755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.225787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 
00:28:37.524 [2024-11-27 05:50:25.226030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.226061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.226231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.226262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.226452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.226483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.226609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.226640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.226941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.226973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 
00:28:37.524 [2024-11-27 05:50:25.227077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.227108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.227278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.227309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.227490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.227520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.227693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.227725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.227905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.227942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 
00:28:37.524 [2024-11-27 05:50:25.228070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.228101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.228288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.228319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.228504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.228535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.228772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.228804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.228985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.229016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 
00:28:37.524 [2024-11-27 05:50:25.229278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.229309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.229562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.229593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.229782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.229814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.229999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.230031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.230292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.230323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 
00:28:37.524 [2024-11-27 05:50:25.230458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.230490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.230663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.230702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.230873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.230904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.231051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.231082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.231198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.231228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 
00:28:37.524 [2024-11-27 05:50:25.231347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.231378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.231573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.231605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.231788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.231821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.232031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.232063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.232181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.232211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 
00:28:37.524 [2024-11-27 05:50:25.232447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.232478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.232592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.232623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.232839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.232871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.233003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.233034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 00:28:37.524 [2024-11-27 05:50:25.233149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.233179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.524 qpair failed and we were unable to recover it. 
00:28:37.524 [2024-11-27 05:50:25.233358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.524 [2024-11-27 05:50:25.233389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.233568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.233604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.233861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.233893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.234063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.234095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.234269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.234299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 
00:28:37.525 [2024-11-27 05:50:25.234419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.234450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.234643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.234691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.234960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.234990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.235117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.235147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.235327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.235358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 
00:28:37.525 [2024-11-27 05:50:25.235534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.235564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.235776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.235808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.235999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.236029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.236144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.236174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.236352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.236383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 
00:28:37.525 [2024-11-27 05:50:25.236648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.236689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.236876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.236906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.237086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.237117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.237287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.237318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.237523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.237554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 
00:28:37.525 [2024-11-27 05:50:25.237655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.237698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.237881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.237911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.238079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.238110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.238328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.238359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 00:28:37.525 [2024-11-27 05:50:25.238642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.525 [2024-11-27 05:50:25.238679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.525 qpair failed and we were unable to recover it. 
00:28:37.528 [2024-11-27 05:50:25.263168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.263199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.263382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.263414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.263657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.263751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.263972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.264007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.264145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.264177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 
00:28:37.528 [2024-11-27 05:50:25.264368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.264400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.264505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.264536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.264652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.264697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.264937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.264969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.265217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.265249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 
00:28:37.528 [2024-11-27 05:50:25.265369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.265400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.265637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.265668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.265891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.265922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.266129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.266159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 00:28:37.528 [2024-11-27 05:50:25.266447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-27 05:50:25.266478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.528 qpair failed and we were unable to recover it. 
00:28:37.528 [2024-11-27 05:50:25.266649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.266700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.266831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.266862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.267073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.267103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.267361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.267391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.267664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.267706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 
00:28:37.529 [2024-11-27 05:50:25.267954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.267985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.268175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.268206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.268387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.268418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.268613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.268644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.268834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.268866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 
00:28:37.529 [2024-11-27 05:50:25.269052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.269083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.269274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.269305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.269486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.269517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.269706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-27 05:50:25.269739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.529 qpair failed and we were unable to recover it. 00:28:37.529 [2024-11-27 05:50:25.269860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.269892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.530 qpair failed and we were unable to recover it. 
00:28:37.530 [2024-11-27 05:50:25.270133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.270164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.530 qpair failed and we were unable to recover it. 00:28:37.530 [2024-11-27 05:50:25.270428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.270459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.530 qpair failed and we were unable to recover it. 00:28:37.530 [2024-11-27 05:50:25.270583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.270613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.530 qpair failed and we were unable to recover it. 00:28:37.530 [2024-11-27 05:50:25.270802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.270834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.530 qpair failed and we were unable to recover it. 00:28:37.530 [2024-11-27 05:50:25.271008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.271039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.530 qpair failed and we were unable to recover it. 
00:28:37.530 [2024-11-27 05:50:25.271219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.271250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.530 qpair failed and we were unable to recover it. 00:28:37.530 [2024-11-27 05:50:25.271488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.271520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.530 qpair failed and we were unable to recover it. 00:28:37.530 [2024-11-27 05:50:25.271700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.271732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.530 qpair failed and we were unable to recover it. 00:28:37.530 [2024-11-27 05:50:25.271905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.271937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.530 qpair failed and we were unable to recover it. 00:28:37.530 [2024-11-27 05:50:25.272126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-27 05:50:25.272157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 
00:28:37.531 [2024-11-27 05:50:25.272282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.272313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 00:28:37.531 [2024-11-27 05:50:25.272553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.272585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 00:28:37.531 [2024-11-27 05:50:25.272765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.272797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 00:28:37.531 [2024-11-27 05:50:25.273038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.273068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 00:28:37.531 [2024-11-27 05:50:25.273263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.273294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 
00:28:37.531 [2024-11-27 05:50:25.273476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.273508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 00:28:37.531 [2024-11-27 05:50:25.273621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.273652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 00:28:37.531 [2024-11-27 05:50:25.273922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.273954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 00:28:37.531 [2024-11-27 05:50:25.274140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.274171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 00:28:37.531 [2024-11-27 05:50:25.274352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.274383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 
00:28:37.531 [2024-11-27 05:50:25.274558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.274590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 00:28:37.531 [2024-11-27 05:50:25.274796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.531 [2024-11-27 05:50:25.274829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.531 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.274961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.274992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.275174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.275205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.275308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.275340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 
00:28:37.532 [2024-11-27 05:50:25.275438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.275475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.275715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.275749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.275873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.275904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.276101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.276132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.276263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.276294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 
00:28:37.532 [2024-11-27 05:50:25.276483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.276515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.276723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.276755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.276931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.276962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.277072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.277103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.277368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.277399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 
00:28:37.532 [2024-11-27 05:50:25.277529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.277560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.277849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.532 [2024-11-27 05:50:25.277881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.532 qpair failed and we were unable to recover it. 00:28:37.532 [2024-11-27 05:50:25.278065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.533 [2024-11-27 05:50:25.278097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.533 qpair failed and we were unable to recover it. 00:28:37.533 [2024-11-27 05:50:25.278285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.533 [2024-11-27 05:50:25.278317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.533 qpair failed and we were unable to recover it. 00:28:37.533 [2024-11-27 05:50:25.278445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.533 [2024-11-27 05:50:25.278476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.533 qpair failed and we were unable to recover it. 
00:28:37.533 [2024-11-27 05:50:25.278680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.533 [2024-11-27 05:50:25.278712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.533 qpair failed and we were unable to recover it. 00:28:37.533 [2024-11-27 05:50:25.278899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.533 [2024-11-27 05:50:25.278931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.533 qpair failed and we were unable to recover it. 00:28:37.533 [2024-11-27 05:50:25.279106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.533 [2024-11-27 05:50:25.279137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.533 qpair failed and we were unable to recover it. 00:28:37.533 [2024-11-27 05:50:25.279354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.533 [2024-11-27 05:50:25.279386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.533 qpair failed and we were unable to recover it. 00:28:37.533 [2024-11-27 05:50:25.279508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.533 [2024-11-27 05:50:25.279539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.533 qpair failed and we were unable to recover it. 
00:28:37.533 [2024-11-27 05:50:25.279733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.279766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.533 [2024-11-27 05:50:25.279870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.279902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.533 [2024-11-27 05:50:25.280130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.280161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.533 [2024-11-27 05:50:25.280282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.280313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.533 [2024-11-27 05:50:25.280507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.280538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.533 [2024-11-27 05:50:25.280717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.280749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.533 [2024-11-27 05:50:25.280870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.280902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.533 [2024-11-27 05:50:25.281086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.281156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.533 [2024-11-27 05:50:25.281423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.281459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.533 [2024-11-27 05:50:25.281681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.281717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.533 [2024-11-27 05:50:25.281913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.533 [2024-11-27 05:50:25.281945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.533 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.282119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.282150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.282344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.282375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.282548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.282578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.282698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.282731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.282928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.282960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.283153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.283184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.283306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.283336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.283577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.283609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.283808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.283840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.284054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.284095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.284245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.284276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.284425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.284455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.284645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.284686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.284865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.284896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.285035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.285066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.285262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.285294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.285485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.285516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.285704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.285737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.285928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.285959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.286138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.286170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.286411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.286442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.286712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.286746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.286862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.286893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.287082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.287114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.287294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.287325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.287454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.287485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.287741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.287773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.287956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.287987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.288160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.288190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.288368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.288400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.288601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.288631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.288820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.288852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.289028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.289060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.289237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.289267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.289451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.534 [2024-11-27 05:50:25.289483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.534 qpair failed and we were unable to recover it.
00:28:37.534 [2024-11-27 05:50:25.289681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.289714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.289826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.289862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.290123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.290154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.290273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.290304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.290488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.290519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.290719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.290752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.290944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.290975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.291172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.291204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.291376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.291407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.291523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.291553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.291816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.291849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.292058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.292089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.292359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.292392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.292528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.292559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.292736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.292769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.292896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.292928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.293100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.293132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.293249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.293280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.293464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.293496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.293687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.293720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.293907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.293938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.294138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.294168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.294339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.294371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.294485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.294516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.294704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.294737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.294917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.294948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.295151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.295182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.295316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.295347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.295545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.295577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.295797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.295829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.296033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.296064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.296255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.296286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.296468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.296499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.296736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.296767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.296954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.296985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.297174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.297205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.297378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.297408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.297580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.297611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.297828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.297860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.298048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.298079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.298189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.298220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.535 [2024-11-27 05:50:25.298414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.535 [2024-11-27 05:50:25.298450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.535 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.298632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.298663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.298860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.298892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.299096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.299128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.299262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.299293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.299477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.299508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.299747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.299780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.299974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.300005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.300121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.300152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.300338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.300370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.300638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.300678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.300857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.300888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.301095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.301126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.301366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.301398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.301647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.301685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.301877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.301908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.302087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.302118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.302305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.302336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.302522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.302553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.302664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.302704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.302883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.302914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.303084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.303115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.303328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.303359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.303486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.303517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.303640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.303690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.303956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.303987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.304239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.304271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.304423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.536 [2024-11-27 05:50:25.304455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.536 qpair failed and we were unable to recover it.
00:28:37.536 [2024-11-27 05:50:25.304645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.304687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.304930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.304961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.305197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.305229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.305498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.305529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.305770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.305803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 
00:28:37.536 [2024-11-27 05:50:25.305939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.305970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.306233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.306264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.306396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.306426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.306540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.306571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.306783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.306815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 
00:28:37.536 [2024-11-27 05:50:25.306984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.307015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.307207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.307238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.307380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.307423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.536 qpair failed and we were unable to recover it. 00:28:37.536 [2024-11-27 05:50:25.307619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.536 [2024-11-27 05:50:25.307651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.307842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.307873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 
00:28:37.537 [2024-11-27 05:50:25.307979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.308010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.308121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.308152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.308323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.308355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.308614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.308645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.308915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.308947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 
00:28:37.537 [2024-11-27 05:50:25.309132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.309162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.309289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.309321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.309451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.309482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.309660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.309701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.309982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.310013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 
00:28:37.537 [2024-11-27 05:50:25.310189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.310221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.310418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.310449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.310650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.310703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.310881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.310913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.311180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.311210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 
00:28:37.537 [2024-11-27 05:50:25.311392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.311424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.311596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.311627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.311818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.311850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.311966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.311997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.312242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.312272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 
00:28:37.537 [2024-11-27 05:50:25.312393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.312425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.312712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.312745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.312871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.312901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.313142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.313173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.313350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.313381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 
00:28:37.537 [2024-11-27 05:50:25.313495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.313526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.313725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.313757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.313972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.314002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.314192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.314223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.537 [2024-11-27 05:50:25.314347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.314377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 
00:28:37.537 [2024-11-27 05:50:25.314500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.537 [2024-11-27 05:50:25.314531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.537 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.314722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.314754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.314922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.314953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.315149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.315180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.315384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.315415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 
00:28:37.538 [2024-11-27 05:50:25.315707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.315739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.315926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.315957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.316147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.316184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.316448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.316479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.316745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.316778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 
00:28:37.538 [2024-11-27 05:50:25.316914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.316945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.317170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.317201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.317441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.317473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.317728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.317760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.317933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.317964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 
00:28:37.538 [2024-11-27 05:50:25.318212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.318243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.318488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.318519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.318712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.318744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.318992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.319024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.319152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.319183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 
00:28:37.538 [2024-11-27 05:50:25.319367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.319398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.319537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.319568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.319741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.319773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.319969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.319999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.320170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.320201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 
00:28:37.538 [2024-11-27 05:50:25.320334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.320365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.538 qpair failed and we were unable to recover it. 00:28:37.538 [2024-11-27 05:50:25.320537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.538 [2024-11-27 05:50:25.320568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.320763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.320796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.320978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.321009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.321200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.321230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 
00:28:37.539 [2024-11-27 05:50:25.321467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.321499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.321764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.321796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.321982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.322013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.322219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.322251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.322455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.322487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 
00:28:37.539 [2024-11-27 05:50:25.322689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.322721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.322928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.322959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.323141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.323172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.323418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.323448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.323711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.323744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 
00:28:37.539 [2024-11-27 05:50:25.323941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.323972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.324104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.324136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.324381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.539 [2024-11-27 05:50:25.324413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.539 qpair failed and we were unable to recover it. 00:28:37.539 [2024-11-27 05:50:25.324585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.324617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.324817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.324849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 
00:28:37.540 [2024-11-27 05:50:25.325025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.325056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.325262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.325293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.325464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.325501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.325750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.325782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.325886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.325917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 
00:28:37.540 [2024-11-27 05:50:25.326156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.326187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.326379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.326409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.326593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.326624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.326822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.326853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.327114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.327145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 
00:28:37.540 [2024-11-27 05:50:25.327321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.327352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.327524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.327556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.327714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.327747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.327858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.327890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.328008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.328039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 
00:28:37.540 [2024-11-27 05:50:25.328276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.328307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.328554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.328586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.328768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.328799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.328928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.328959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.329225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.329257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 
00:28:37.540 [2024-11-27 05:50:25.329441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.329473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.329592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.329623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.329825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.329856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.540 [2024-11-27 05:50:25.330065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.540 [2024-11-27 05:50:25.330097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.540 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.330309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.330340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 
00:28:37.541 [2024-11-27 05:50:25.330455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.330486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.330700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.330734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.330907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.330938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.331109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.331139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.331366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.331398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 
00:28:37.541 [2024-11-27 05:50:25.331533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.331564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.331747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.331780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.332023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.332054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.332223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.332255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.332435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.332466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 
00:28:37.541 [2024-11-27 05:50:25.332601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.332633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.332830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.332862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.333042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.333073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.333252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.333283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.333542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.333574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 
00:28:37.541 [2024-11-27 05:50:25.333766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.333798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.333903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.333934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.334150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.334187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.334316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.334347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.334482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.334512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 
00:28:37.541 [2024-11-27 05:50:25.334697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.334730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.334839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.334870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.334985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.335015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.335139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.335168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.335433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.335464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 
00:28:37.541 [2024-11-27 05:50:25.335577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.541 [2024-11-27 05:50:25.335606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.541 qpair failed and we were unable to recover it. 00:28:37.541 [2024-11-27 05:50:25.335863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.335896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.336069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.336098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.336267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.336299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.336481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.336512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 
00:28:37.542 [2024-11-27 05:50:25.336704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.336737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.336871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.336901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.337098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.337129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.337387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.337418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.337690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.337723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 
00:28:37.542 [2024-11-27 05:50:25.337848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.337878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.338002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.338032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.338150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.338181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.338361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.338390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.338653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.338696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 
00:28:37.542 [2024-11-27 05:50:25.338814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.338842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.339025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.339056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.339233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.339262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.339453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.339485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.339705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.339738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 
00:28:37.542 [2024-11-27 05:50:25.339922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.339954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.340088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.340117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.340288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.340319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.340422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.340453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.340574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.340605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 
00:28:37.542 [2024-11-27 05:50:25.340789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.340821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.341008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.341040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.341229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.341260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.341391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.341423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.341536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.341567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 
00:28:37.542 [2024-11-27 05:50:25.341689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.341721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.341902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.341935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.342173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.342210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.542 qpair failed and we were unable to recover it. 00:28:37.542 [2024-11-27 05:50:25.342477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.542 [2024-11-27 05:50:25.342509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.543 qpair failed and we were unable to recover it. 00:28:37.543 [2024-11-27 05:50:25.342702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.543 [2024-11-27 05:50:25.342733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.543 qpair failed and we were unable to recover it. 
00:28:37.543 [2024-11-27 05:50:25.342939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.543 [2024-11-27 05:50:25.342972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.543 qpair failed and we were unable to recover it. 00:28:37.543 [2024-11-27 05:50:25.343094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.543 [2024-11-27 05:50:25.343123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.543 qpair failed and we were unable to recover it. 00:28:37.543 [2024-11-27 05:50:25.343262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.543 [2024-11-27 05:50:25.343291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.543 qpair failed and we were unable to recover it. 00:28:37.543 [2024-11-27 05:50:25.343491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.543 [2024-11-27 05:50:25.343521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.543 qpair failed and we were unable to recover it. 00:28:37.543 [2024-11-27 05:50:25.343635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.543 [2024-11-27 05:50:25.343665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.543 qpair failed and we were unable to recover it. 
00:28:37.543 [2024-11-27 05:50:25.343851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.343881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.344135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.344166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.344361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.344392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.344496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.344527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.344759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.344792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.344962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.344993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.345189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.345221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.345394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.345425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.345592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.345622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.345897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.345929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.346040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.346071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.346337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.346367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.346555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.346586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.346834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.346866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.543 [2024-11-27 05:50:25.346999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.543 [2024-11-27 05:50:25.347030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.543 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.347135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.347164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.347286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.347315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.347580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.347610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.347736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.347767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.348010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.348082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.348362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.348397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.348522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.348554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.348742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.348777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.348968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.349000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.349265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.349296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.349488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.349520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.349650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.349697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.349890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.349921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.350095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.544 [2024-11-27 05:50:25.350127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.544 qpair failed and we were unable to recover it.
00:28:37.544 [2024-11-27 05:50:25.350373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.545 [2024-11-27 05:50:25.350404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.545 qpair failed and we were unable to recover it.
00:28:37.545 [2024-11-27 05:50:25.350590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.545 [2024-11-27 05:50:25.350621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.545 qpair failed and we were unable to recover it.
00:28:37.545 [2024-11-27 05:50:25.350828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.545 [2024-11-27 05:50:25.350860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.545 qpair failed and we were unable to recover it.
00:28:37.545 [2024-11-27 05:50:25.351030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.545 [2024-11-27 05:50:25.351062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.545 qpair failed and we were unable to recover it.
00:28:37.545 [2024-11-27 05:50:25.351242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.545 [2024-11-27 05:50:25.351273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.545 qpair failed and we were unable to recover it.
00:28:37.545 [2024-11-27 05:50:25.351455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.545 [2024-11-27 05:50:25.351486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.545 qpair failed and we were unable to recover it.
00:28:37.545 [2024-11-27 05:50:25.351662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.545 [2024-11-27 05:50:25.351705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.545 qpair failed and we were unable to recover it.
00:28:37.545 [2024-11-27 05:50:25.351916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.545 [2024-11-27 05:50:25.351947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.545 qpair failed and we were unable to recover it.
00:28:37.545 [2024-11-27 05:50:25.352075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.545 [2024-11-27 05:50:25.352106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.545 qpair failed and we were unable to recover it.
00:28:37.545 [2024-11-27 05:50:25.352295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.545 [2024-11-27 05:50:25.352326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.352592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.352622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.352747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.352780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.352951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.352981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.353176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.353208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.353336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.353365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.353547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.353578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.353752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.353784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.354004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.354039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.354218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.354249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.354490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.354522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.354738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.354770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.354989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.355021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.355193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.355224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.355409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.355439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.355683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.355717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.355904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.355936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.356172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.356204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.356348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.356380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.356494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.356524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.356769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.356801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.357062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.357103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.357206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.357236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.357498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.357529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.357649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.357686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.357877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.357909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.358079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.358109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.358369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.358401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.358583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.358614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.358863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.358895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.359079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.359110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.359300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.359331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.359520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.359551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.359727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.359760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.360017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.360049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.360270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.360301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.546 qpair failed and we were unable to recover it.
00:28:37.546 [2024-11-27 05:50:25.360480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.546 [2024-11-27 05:50:25.360511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.360617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.360647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.360903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.360934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.361109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.361140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.361320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.361351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.361535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.361566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.361692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.361723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.361913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.361943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.362130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.362161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.362292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.362321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.362504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.362535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.362641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.362677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.362863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.362896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.363160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.363192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.363303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.363335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.363451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.363483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.363748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.363780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.363959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.363991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.364118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.364150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.364397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.364429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.364615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.547 [2024-11-27 05:50:25.364647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.547 qpair failed and we were unable to recover it.
00:28:37.547 [2024-11-27 05:50:25.364920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.364952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.365130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.365161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.365297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.365329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.365471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.365502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.365683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.365716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 
00:28:37.547 [2024-11-27 05:50:25.365905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.365936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.366060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.366089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.366258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.366289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.366463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.366495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.366690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.366723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 
00:28:37.547 [2024-11-27 05:50:25.366988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.367020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.367186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.367217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.367475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.367506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.367648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.367697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.367827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.367859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 
00:28:37.547 [2024-11-27 05:50:25.368047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.368078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.368343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.368375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.368566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.368598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.368797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.368830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.369090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.369121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 
00:28:37.547 [2024-11-27 05:50:25.369314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.369347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.369530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.369561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.369739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.369771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.369942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.547 [2024-11-27 05:50:25.369973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.547 qpair failed and we were unable to recover it. 00:28:37.547 [2024-11-27 05:50:25.370234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.370265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 
00:28:37.548 [2024-11-27 05:50:25.370394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.370425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.370619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.370650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.370808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.370843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.371026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.371057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.371227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.371257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 
00:28:37.548 [2024-11-27 05:50:25.371393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.371423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.371595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.371632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.371881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.371912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.372096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.372128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.372261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.372292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 
00:28:37.548 [2024-11-27 05:50:25.372464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.372496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.372683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.372727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.372910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.372941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.373122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.373153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.373287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.373319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 
00:28:37.548 [2024-11-27 05:50:25.373506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.373537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.373680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.373713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.373924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.373955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.374192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.374223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.374406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.374436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 
00:28:37.548 [2024-11-27 05:50:25.374613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.374644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.374838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.374869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.375018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.375050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.375221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.375252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.375440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.375472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 
00:28:37.548 [2024-11-27 05:50:25.375660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.375719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.375921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.375952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.376132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.376163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.376295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.376326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.376591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.376622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 
00:28:37.548 [2024-11-27 05:50:25.376757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.376790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.376961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.376992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.377196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.377227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.377355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.377385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.377570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.377601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 
00:28:37.548 [2024-11-27 05:50:25.377730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.377763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.377957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.377989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.378239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.378270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.378478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.378509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.378625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.378656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 
00:28:37.548 [2024-11-27 05:50:25.378803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.378835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.379009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.379039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.379157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.379186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.548 [2024-11-27 05:50:25.379428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.548 [2024-11-27 05:50:25.379458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.548 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.379580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.379610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 
00:28:37.549 [2024-11-27 05:50:25.379726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.379759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.379927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.379965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.380150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.380181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.380312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.380343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.380608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.380639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 
00:28:37.549 [2024-11-27 05:50:25.380889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.380921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.381034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.381065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.381196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.381227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.381403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.381434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.381612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.381643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 
00:28:37.549 [2024-11-27 05:50:25.381838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.381870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.381981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.382013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.382202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.382233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.382401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.382432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.382618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.382650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 
00:28:37.549 [2024-11-27 05:50:25.382864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.382896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.383110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.383141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.383267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.383297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.383561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.383592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 00:28:37.549 [2024-11-27 05:50:25.383768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.549 [2024-11-27 05:50:25.383801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.549 qpair failed and we were unable to recover it. 
00:28:37.549-00:28:37.554 [log collapsed: the identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." triple repeats ~110 more times between 05:50:25.383110 and 05:50:25.408195, all for tqpair=0x7ff204000b90, addr=10.0.0.2, port=4420]
00:28:37.554 [2024-11-27 05:50:25.408457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.554 [2024-11-27 05:50:25.408487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.554 qpair failed and we were unable to recover it. 00:28:37.554 [2024-11-27 05:50:25.408708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.554 [2024-11-27 05:50:25.408741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.554 qpair failed and we were unable to recover it. 00:28:37.554 [2024-11-27 05:50:25.408926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.554 [2024-11-27 05:50:25.408958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.554 qpair failed and we were unable to recover it. 00:28:37.554 [2024-11-27 05:50:25.409132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.554 [2024-11-27 05:50:25.409162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.554 qpair failed and we were unable to recover it. 00:28:37.554 [2024-11-27 05:50:25.409413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.554 [2024-11-27 05:50:25.409444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.554 qpair failed and we were unable to recover it. 
00:28:37.554 [2024-11-27 05:50:25.409569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.554 [2024-11-27 05:50:25.409601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.554 qpair failed and we were unable to recover it. 00:28:37.554 [2024-11-27 05:50:25.409842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.554 [2024-11-27 05:50:25.409875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.554 qpair failed and we were unable to recover it. 00:28:37.554 [2024-11-27 05:50:25.410076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.554 [2024-11-27 05:50:25.410107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.554 qpair failed and we were unable to recover it. 00:28:37.554 [2024-11-27 05:50:25.410379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.554 [2024-11-27 05:50:25.410411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.554 qpair failed and we were unable to recover it. 00:28:37.554 [2024-11-27 05:50:25.410532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.554 [2024-11-27 05:50:25.410563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.554 qpair failed and we were unable to recover it. 
00:28:37.554 [2024-11-27 05:50:25.410751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.410785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.410926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.410958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.411124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.411155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.411331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.411362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.411549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.411580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 
00:28:37.555 [2024-11-27 05:50:25.411685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.411717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.411892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.411923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.412025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.412056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.412191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.412222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.412344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.412374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 
00:28:37.555 [2024-11-27 05:50:25.412639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.412689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.412937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.412969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.413206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.413238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.555 [2024-11-27 05:50:25.413430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.555 [2024-11-27 05:50:25.413461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.555 qpair failed and we were unable to recover it. 00:28:37.556 [2024-11-27 05:50:25.413609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.556 [2024-11-27 05:50:25.413640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.556 qpair failed and we were unable to recover it. 
00:28:37.556 [2024-11-27 05:50:25.413827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.556 [2024-11-27 05:50:25.413859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.556 qpair failed and we were unable to recover it. 00:28:37.556 [2024-11-27 05:50:25.414098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.556 [2024-11-27 05:50:25.414130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.556 qpair failed and we were unable to recover it. 00:28:37.556 [2024-11-27 05:50:25.414317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.556 [2024-11-27 05:50:25.414348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.556 qpair failed and we were unable to recover it. 00:28:37.556 [2024-11-27 05:50:25.414520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.556 [2024-11-27 05:50:25.414551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.556 qpair failed and we were unable to recover it. 00:28:37.556 [2024-11-27 05:50:25.414735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.556 [2024-11-27 05:50:25.414767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.556 qpair failed and we were unable to recover it. 
00:28:37.557 [2024-11-27 05:50:25.414935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.414966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.557 qpair failed and we were unable to recover it. 00:28:37.557 [2024-11-27 05:50:25.415078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.415115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.557 qpair failed and we were unable to recover it. 00:28:37.557 [2024-11-27 05:50:25.415291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.415321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.557 qpair failed and we were unable to recover it. 00:28:37.557 [2024-11-27 05:50:25.415494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.415525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.557 qpair failed and we were unable to recover it. 00:28:37.557 [2024-11-27 05:50:25.415705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.415738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.557 qpair failed and we were unable to recover it. 
00:28:37.557 [2024-11-27 05:50:25.415939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.415971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.557 qpair failed and we were unable to recover it. 00:28:37.557 [2024-11-27 05:50:25.416155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.416186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.557 qpair failed and we were unable to recover it. 00:28:37.557 [2024-11-27 05:50:25.416425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.416457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.557 qpair failed and we were unable to recover it. 00:28:37.557 [2024-11-27 05:50:25.416707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.416743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.557 qpair failed and we were unable to recover it. 00:28:37.557 [2024-11-27 05:50:25.416949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.416981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.557 qpair failed and we were unable to recover it. 
00:28:37.557 [2024-11-27 05:50:25.417172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.557 [2024-11-27 05:50:25.417204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.558 qpair failed and we were unable to recover it. 00:28:37.558 [2024-11-27 05:50:25.417398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.558 [2024-11-27 05:50:25.417429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.558 qpair failed and we were unable to recover it. 00:28:37.558 [2024-11-27 05:50:25.417620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.558 [2024-11-27 05:50:25.417651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.558 qpair failed and we were unable to recover it. 00:28:37.558 [2024-11-27 05:50:25.417787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.558 [2024-11-27 05:50:25.417819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.558 qpair failed and we were unable to recover it. 00:28:37.558 [2024-11-27 05:50:25.417956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.558 [2024-11-27 05:50:25.417986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.558 qpair failed and we were unable to recover it. 
00:28:37.558 [2024-11-27 05:50:25.418217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.559 [2024-11-27 05:50:25.418249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.559 qpair failed and we were unable to recover it. 00:28:37.559 [2024-11-27 05:50:25.418515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.559 [2024-11-27 05:50:25.418546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.559 qpair failed and we were unable to recover it. 00:28:37.559 [2024-11-27 05:50:25.418660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.559 [2024-11-27 05:50:25.418700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.559 qpair failed and we were unable to recover it. 00:28:37.559 [2024-11-27 05:50:25.418885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.559 [2024-11-27 05:50:25.418916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.559 qpair failed and we were unable to recover it. 00:28:37.559 [2024-11-27 05:50:25.419082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.559 [2024-11-27 05:50:25.419113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.559 qpair failed and we were unable to recover it. 
00:28:37.559 [2024-11-27 05:50:25.419235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.559 [2024-11-27 05:50:25.419266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.559 qpair failed and we were unable to recover it. 00:28:37.559 [2024-11-27 05:50:25.419457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.559 [2024-11-27 05:50:25.419488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.559 qpair failed and we were unable to recover it. 00:28:37.559 [2024-11-27 05:50:25.419623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.559 [2024-11-27 05:50:25.419655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.559 qpair failed and we were unable to recover it. 00:28:37.559 [2024-11-27 05:50:25.419788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.559 [2024-11-27 05:50:25.419819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.559 qpair failed and we were unable to recover it. 00:28:37.559 [2024-11-27 05:50:25.420060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.559 [2024-11-27 05:50:25.420092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.559 qpair failed and we were unable to recover it. 
00:28:37.560 [2024-11-27 05:50:25.420199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.560 [2024-11-27 05:50:25.420230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.560 qpair failed and we were unable to recover it. 00:28:37.560 [2024-11-27 05:50:25.420474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.560 [2024-11-27 05:50:25.420505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.560 qpair failed and we were unable to recover it. 00:28:37.560 [2024-11-27 05:50:25.420710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.560 [2024-11-27 05:50:25.420742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.560 qpair failed and we were unable to recover it. 00:28:37.560 [2024-11-27 05:50:25.420874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.560 [2024-11-27 05:50:25.420906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.560 qpair failed and we were unable to recover it. 00:28:37.560 [2024-11-27 05:50:25.421086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.560 [2024-11-27 05:50:25.421118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.560 qpair failed and we were unable to recover it. 
00:28:37.560 [2024-11-27 05:50:25.421222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.560 [2024-11-27 05:50:25.421253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.560 qpair failed and we were unable to recover it. 00:28:37.561 [2024-11-27 05:50:25.421463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.561 [2024-11-27 05:50:25.421495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.561 qpair failed and we were unable to recover it. 00:28:37.561 [2024-11-27 05:50:25.421694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.561 [2024-11-27 05:50:25.421726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.561 qpair failed and we were unable to recover it. 00:28:37.561 [2024-11-27 05:50:25.421917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.561 [2024-11-27 05:50:25.421948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.561 qpair failed and we were unable to recover it. 00:28:37.561 [2024-11-27 05:50:25.422138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.561 [2024-11-27 05:50:25.422169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.561 qpair failed and we were unable to recover it. 
00:28:37.561 [2024-11-27 05:50:25.422338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.562 [2024-11-27 05:50:25.422370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.562 qpair failed and we were unable to recover it. 00:28:37.562 [2024-11-27 05:50:25.422486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.562 [2024-11-27 05:50:25.422517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.562 qpair failed and we were unable to recover it. 00:28:37.562 [2024-11-27 05:50:25.422708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.562 [2024-11-27 05:50:25.422740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.562 qpair failed and we were unable to recover it. 00:28:37.562 [2024-11-27 05:50:25.422944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.562 [2024-11-27 05:50:25.422976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.562 qpair failed and we were unable to recover it. 00:28:37.562 [2024-11-27 05:50:25.423152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.562 [2024-11-27 05:50:25.423182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.562 qpair failed and we were unable to recover it. 
00:28:37.562 [2024-11-27 05:50:25.423351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.562 [2024-11-27 05:50:25.423383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.562 qpair failed and we were unable to recover it. 00:28:37.562 [2024-11-27 05:50:25.423516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.562 [2024-11-27 05:50:25.423554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.562 qpair failed and we were unable to recover it. 00:28:37.562 [2024-11-27 05:50:25.423860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.562 [2024-11-27 05:50:25.423891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.562 qpair failed and we were unable to recover it. 00:28:37.562 [2024-11-27 05:50:25.424065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.562 [2024-11-27 05:50:25.424097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.562 qpair failed and we were unable to recover it. 00:28:37.562 [2024-11-27 05:50:25.424282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.562 [2024-11-27 05:50:25.424312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.562 qpair failed and we were unable to recover it. 
00:28:37.562 [2024-11-27 05:50:25.424551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.562 [2024-11-27 05:50:25.424582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.563 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, i.e. ECONNREFUSED, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7ff204000b90, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt from 05:50:25.424772 through 05:50:25.448079 ...]
00:28:37.566 [2024-11-27 05:50:25.448262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.448293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.448483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.448520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.448711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.448743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.448867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.448898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.449012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.449043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 
00:28:37.566 [2024-11-27 05:50:25.449220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.449252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.449439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.449470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.449663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.449704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.449953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.449985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.450109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.450140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 
00:28:37.566 [2024-11-27 05:50:25.450326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.450358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.450541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.450572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.450757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.450789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.450981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.451013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.451188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.451220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 
00:28:37.566 [2024-11-27 05:50:25.451426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.451458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.451570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.451601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.451772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.451804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.451998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.452029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.452267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.452299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 
00:28:37.566 [2024-11-27 05:50:25.452489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.452521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.452698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.452734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.453014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.453046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.453286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.453318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.453426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.453458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 
00:28:37.566 [2024-11-27 05:50:25.453653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.453691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.566 qpair failed and we were unable to recover it. 00:28:37.566 [2024-11-27 05:50:25.453864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.566 [2024-11-27 05:50:25.453895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.454095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.454127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.454258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.454289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.454479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.454509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 
00:28:37.567 [2024-11-27 05:50:25.454747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.454779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.454893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.454925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.455032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.455063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.455248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.455279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.455543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.455575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 
00:28:37.567 [2024-11-27 05:50:25.455756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.455789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.455964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.455995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.456132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.456164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.456270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.456301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.456487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.456518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 
00:28:37.567 [2024-11-27 05:50:25.456649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.456698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.456829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.456866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.457131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.457163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.457367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.457398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.457636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.457668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 
00:28:37.567 [2024-11-27 05:50:25.457809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.457840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.457962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.457994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.458183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.458215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.458336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.458367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.458477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.458508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 
00:28:37.567 [2024-11-27 05:50:25.458701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.458733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.458920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.458952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.459212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.459243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.459454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.459485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.459679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.459711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 
00:28:37.567 [2024-11-27 05:50:25.459905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.459937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.460174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.460205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.460323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.460355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.460470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.460500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.460690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.460721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 
00:28:37.567 [2024-11-27 05:50:25.460914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.460945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.461059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.461091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.461278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.461309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.461414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.461446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.461617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.461648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 
00:28:37.567 [2024-11-27 05:50:25.461831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.461862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.462032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.462065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.462309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.462340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.462588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.462620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.462760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.462793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 
00:28:37.567 [2024-11-27 05:50:25.463052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.463082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.463268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.463299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.463484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.463516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.463766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.463799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.463922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.463954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 
00:28:37.567 [2024-11-27 05:50:25.464121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.464152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.464334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.464366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.464628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.464659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.464860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.567 [2024-11-27 05:50:25.464891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.567 qpair failed and we were unable to recover it. 00:28:37.567 [2024-11-27 05:50:25.465073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.568 [2024-11-27 05:50:25.465105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.568 qpair failed and we were unable to recover it. 
00:28:37.569 [2024-11-27 05:50:25.487478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.569 [2024-11-27 05:50:25.487510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.569 qpair failed and we were unable to recover it. 00:28:37.569 [2024-11-27 05:50:25.487655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.569 [2024-11-27 05:50:25.487694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.569 qpair failed and we were unable to recover it. 00:28:37.569 [2024-11-27 05:50:25.487869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.569 [2024-11-27 05:50:25.487902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.569 qpair failed and we were unable to recover it. 00:28:37.569 [2024-11-27 05:50:25.488077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.569 [2024-11-27 05:50:25.488108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.569 qpair failed and we were unable to recover it. 00:28:37.569 [2024-11-27 05:50:25.488229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.569 [2024-11-27 05:50:25.488261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.569 qpair failed and we were unable to recover it. 
00:28:37.569 [2024-11-27 05:50:25.488377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.569 [2024-11-27 05:50:25.488408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.569 qpair failed and we were unable to recover it. 00:28:37.569 [2024-11-27 05:50:25.488517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.569 [2024-11-27 05:50:25.488549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.569 qpair failed and we were unable to recover it. 00:28:37.569 [2024-11-27 05:50:25.488720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.569 [2024-11-27 05:50:25.488752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.569 qpair failed and we were unable to recover it. 00:28:37.569 [2024-11-27 05:50:25.488951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.569 [2024-11-27 05:50:25.488983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.569 qpair failed and we were unable to recover it. 00:28:37.851 [2024-11-27 05:50:25.489124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.851 [2024-11-27 05:50:25.489156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.851 qpair failed and we were unable to recover it. 
00:28:37.851 [2024-11-27 05:50:25.489333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.851 [2024-11-27 05:50:25.489368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.851 qpair failed and we were unable to recover it. 00:28:37.851 [2024-11-27 05:50:25.489605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.851 [2024-11-27 05:50:25.489638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.851 qpair failed and we were unable to recover it. 00:28:37.851 [2024-11-27 05:50:25.489760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.851 [2024-11-27 05:50:25.489791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.851 qpair failed and we were unable to recover it. 00:28:37.851 [2024-11-27 05:50:25.489918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.851 [2024-11-27 05:50:25.489951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.851 qpair failed and we were unable to recover it. 00:28:37.851 [2024-11-27 05:50:25.490083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.851 [2024-11-27 05:50:25.490115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.851 qpair failed and we were unable to recover it. 
00:28:37.851 [2024-11-27 05:50:25.490373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.851 [2024-11-27 05:50:25.490406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.851 qpair failed and we were unable to recover it. 00:28:37.851 [2024-11-27 05:50:25.490612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.851 [2024-11-27 05:50:25.490644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.851 qpair failed and we were unable to recover it. 00:28:37.851 [2024-11-27 05:50:25.490797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.851 [2024-11-27 05:50:25.490830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.851 qpair failed and we were unable to recover it. 00:28:37.851 [2024-11-27 05:50:25.491018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.491049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.491243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.491275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 
00:28:37.852 [2024-11-27 05:50:25.491544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.491577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.491707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.491741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.491862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.491895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.492069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.492101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.492293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.492324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 
00:28:37.852 [2024-11-27 05:50:25.492514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.492546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.492733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.492766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.492960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.492992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.493112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.493143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.493349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.493381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 
00:28:37.852 [2024-11-27 05:50:25.493515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.493548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.493659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.493700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.493880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.493911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.494083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.494115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.494243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.494275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 
00:28:37.852 [2024-11-27 05:50:25.494460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.494492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.494609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.494642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.494770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.494802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.494983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.495021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.495261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.495292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 
00:28:37.852 [2024-11-27 05:50:25.495474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.495507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.495642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.495682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.495945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.495976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.496160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.496192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.496380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.496411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 
00:28:37.852 [2024-11-27 05:50:25.496668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.496711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.496834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.496866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.496986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.497017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.497272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.497303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.497437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.497470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 
00:28:37.852 [2024-11-27 05:50:25.497600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.497631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.497745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.497777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.497987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.498019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.498192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.498224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.852 [2024-11-27 05:50:25.498350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.498381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 
00:28:37.852 [2024-11-27 05:50:25.498506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.852 [2024-11-27 05:50:25.498537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.852 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.498667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.498709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.498890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.498923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.499101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.499132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.499250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.499282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 
00:28:37.853 [2024-11-27 05:50:25.499390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.499422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.499605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.499638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.499820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.499890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.500027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.500063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.500298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.500330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 
00:28:37.853 [2024-11-27 05:50:25.500482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.500516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.500636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.500668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.500856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.500888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.501005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.501036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.501217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.501249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 
00:28:37.853 [2024-11-27 05:50:25.501434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.501465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.501582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.501613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.501745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.501779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.501884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.501914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.502025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.502056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 
00:28:37.853 [2024-11-27 05:50:25.502162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.502193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.502312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.502342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.502515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.502546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.502662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.502713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.502827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.502857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 
00:28:37.853 [2024-11-27 05:50:25.502975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.503006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.503192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.503223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.503337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.503369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.503610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.503640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 00:28:37.853 [2024-11-27 05:50:25.503800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.853 [2024-11-27 05:50:25.503868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:37.853 qpair failed and we were unable to recover it. 
00:28:37.854 [2024-11-27 05:50:25.512776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.854 [2024-11-27 05:50:25.512813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.854 qpair failed and we were unable to recover it.
00:28:37.856 [2024-11-27 05:50:25.524288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.524319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 00:28:37.856 [2024-11-27 05:50:25.524495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.524527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 00:28:37.856 [2024-11-27 05:50:25.524645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.524687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 00:28:37.856 [2024-11-27 05:50:25.524812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.524844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 00:28:37.856 [2024-11-27 05:50:25.524958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.524989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 
00:28:37.856 [2024-11-27 05:50:25.525162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.525193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 00:28:37.856 [2024-11-27 05:50:25.525309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.525339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 00:28:37.856 [2024-11-27 05:50:25.525459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.525490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 00:28:37.856 [2024-11-27 05:50:25.525593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.525630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 00:28:37.856 [2024-11-27 05:50:25.525881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.525912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 
00:28:37.856 [2024-11-27 05:50:25.526150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.526181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.856 qpair failed and we were unable to recover it. 00:28:37.856 [2024-11-27 05:50:25.526351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.856 [2024-11-27 05:50:25.526383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.526572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.526603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.526710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.526741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.526844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.526876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 
00:28:37.857 [2024-11-27 05:50:25.527052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.527083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.527187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.527219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.527409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.527440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.527688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.527720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.527913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.527945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 
00:28:37.857 [2024-11-27 05:50:25.528119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.528150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.528258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.528289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.528554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.528586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.528758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.528791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.528909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.528941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 
00:28:37.857 [2024-11-27 05:50:25.529055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.529087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.529215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.529247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.529371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.529402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.529592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.529629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.529759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.529793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 
00:28:37.857 [2024-11-27 05:50:25.529966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.529997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.530113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.530145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.530276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.530308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.530448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.530480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.530683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.530717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 
00:28:37.857 [2024-11-27 05:50:25.530840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.530872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.530985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.531017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.531124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.531155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.531355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.531387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.531569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.531601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 
00:28:37.857 [2024-11-27 05:50:25.531738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.531771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.531950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.531982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.532100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.532131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.532238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.532269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.532452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.532483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 
00:28:37.857 [2024-11-27 05:50:25.532656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.532698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.532818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.532849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.532988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.533019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.533204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.857 [2024-11-27 05:50:25.533241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.857 qpair failed and we were unable to recover it. 00:28:37.857 [2024-11-27 05:50:25.533380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.533411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 
00:28:37.858 [2024-11-27 05:50:25.533516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.533547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.533729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.533761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.533890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.533922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.534044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.534075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.534246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.534278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 
00:28:37.858 [2024-11-27 05:50:25.534386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.534418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.534659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.534701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.534818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.534849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.535026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.535057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.535251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.535282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 
00:28:37.858 [2024-11-27 05:50:25.535417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.535448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.535712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.535746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.535876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.535908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.536174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.536206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.536309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.536340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 
00:28:37.858 [2024-11-27 05:50:25.536473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.536505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.536689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.536722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.536866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.536897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.537069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.537101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.537231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.537262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 
00:28:37.858 [2024-11-27 05:50:25.537435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.537467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.537651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.537691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.537827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.537859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.538035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.538067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 00:28:37.858 [2024-11-27 05:50:25.538194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.858 [2024-11-27 05:50:25.538224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.858 qpair failed and we were unable to recover it. 
00:28:37.858 [2024-11-27 05:50:25.538466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.858 [2024-11-27 05:50:25.538498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.858 qpair failed and we were unable to recover it.
[The same three-line error sequence — posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats for every subsequent connection attempt, with timestamps from 05:50:25.538601 through 05:50:25.560184.]
00:28:37.861 [2024-11-27 05:50:25.560384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.861 [2024-11-27 05:50:25.560416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.861 qpair failed and we were unable to recover it. 00:28:37.861 [2024-11-27 05:50:25.560590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.861 [2024-11-27 05:50:25.560621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.861 qpair failed and we were unable to recover it. 00:28:37.861 [2024-11-27 05:50:25.560770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.861 [2024-11-27 05:50:25.560806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.861 qpair failed and we were unable to recover it. 00:28:37.861 [2024-11-27 05:50:25.560955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.861 [2024-11-27 05:50:25.560986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.861 qpair failed and we were unable to recover it. 00:28:37.861 [2024-11-27 05:50:25.561115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.861 [2024-11-27 05:50:25.561146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.861 qpair failed and we were unable to recover it. 
00:28:37.861 [2024-11-27 05:50:25.561332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.861 [2024-11-27 05:50:25.561364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.861 qpair failed and we were unable to recover it. 00:28:37.861 [2024-11-27 05:50:25.561634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.861 [2024-11-27 05:50:25.561665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.861 qpair failed and we were unable to recover it. 00:28:37.861 [2024-11-27 05:50:25.561818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.561849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.562055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.562088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.562335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.562367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 
00:28:37.862 [2024-11-27 05:50:25.562538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.562568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.562774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.562809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.563008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.563041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.563256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.563288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.563502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.563534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 
00:28:37.862 [2024-11-27 05:50:25.563767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.563801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.564004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.564034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.564222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.564253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.564438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.564476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.564789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.564820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 
00:28:37.862 [2024-11-27 05:50:25.564950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.564982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.565117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.565148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.565272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.565304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.565566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.565597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.565837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.565870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 
00:28:37.862 [2024-11-27 05:50:25.566057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.566089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.566328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.566359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.566620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.566652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.566856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.566888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.567002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.567032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 
00:28:37.862 [2024-11-27 05:50:25.567153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.567185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.567325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.567357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.567479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.567511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.567715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.567750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.567881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.567912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 
00:28:37.862 [2024-11-27 05:50:25.568054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.568086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.568218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.568250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.568433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.568464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.568580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.568612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.568810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.568844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 
00:28:37.862 [2024-11-27 05:50:25.569115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.569148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.569433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.569465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.569734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.569767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.569982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.570015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 00:28:37.862 [2024-11-27 05:50:25.570200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.862 [2024-11-27 05:50:25.570232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.862 qpair failed and we were unable to recover it. 
00:28:37.862 [2024-11-27 05:50:25.570416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.570449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.570588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.570620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.570888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.570921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.571178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.571210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.571421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.571452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 
00:28:37.863 [2024-11-27 05:50:25.571580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.571611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.571874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.571906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.572092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.572123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.572261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.572293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.572484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.572515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 
00:28:37.863 [2024-11-27 05:50:25.572767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.572801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.572937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.572969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.573156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.573187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.573378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.573410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.573649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.573691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 
00:28:37.863 [2024-11-27 05:50:25.573903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.573935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.574122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.574152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.574294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.574325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.574464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.574495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.574796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.574828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 
00:28:37.863 [2024-11-27 05:50:25.574963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.574995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.575233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.575265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.575454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.575486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.575685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.575718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.575912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.575944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 
00:28:37.863 [2024-11-27 05:50:25.576120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.576151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.576286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.576317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.576520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.576552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.576686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.576719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.576855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.576887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 
00:28:37.863 [2024-11-27 05:50:25.577012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.577043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.577330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.577361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.577571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.577603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.577795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.577828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 00:28:37.863 [2024-11-27 05:50:25.578016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.863 [2024-11-27 05:50:25.578047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.863 qpair failed and we were unable to recover it. 
00:28:37.866 [2024-11-27 05:50:25.604422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.866 [2024-11-27 05:50:25.604454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.866 qpair failed and we were unable to recover it. 00:28:37.866 [2024-11-27 05:50:25.604667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.866 [2024-11-27 05:50:25.604710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.866 qpair failed and we were unable to recover it. 00:28:37.866 [2024-11-27 05:50:25.604923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.604955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.605082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.605114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.605261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.605292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 
00:28:37.867 [2024-11-27 05:50:25.605531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.605562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.605770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.605805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.605940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.605973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.606166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.606197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.606398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.606430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 
00:28:37.867 [2024-11-27 05:50:25.606557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.606587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.606874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.606907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.607048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.607082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.607406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.607438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.607647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.607707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 
00:28:37.867 [2024-11-27 05:50:25.607849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.607887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.608158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.608190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.608414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.608446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.608588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.608620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.608772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.608805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 
00:28:37.867 [2024-11-27 05:50:25.608949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.608980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.609175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.609208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.609486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.609519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.609658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.609701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.609892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.609923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 
00:28:37.867 [2024-11-27 05:50:25.610116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.610148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.610393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.610425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.610668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.610716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.610893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.610924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.611123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.611155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 
00:28:37.867 [2024-11-27 05:50:25.611264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.611295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.611436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.611466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.611715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.611749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.611952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.611985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.612174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.612205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 
00:28:37.867 [2024-11-27 05:50:25.612348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.612380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.612621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.612653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.612867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.612900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.613143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.613177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 00:28:37.867 [2024-11-27 05:50:25.613461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.867 [2024-11-27 05:50:25.613492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.867 qpair failed and we were unable to recover it. 
00:28:37.867 [2024-11-27 05:50:25.613665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.613710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.613850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.613882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.614066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.614097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.614303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.614335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.614519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.614550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 
00:28:37.868 [2024-11-27 05:50:25.614789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.614822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.614950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.614982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.615140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.615172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.615416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.615447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.615655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.615700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 
00:28:37.868 [2024-11-27 05:50:25.615877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.615909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.616036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.616067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.616204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.616237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.616424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.616456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.616649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.616708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 
00:28:37.868 [2024-11-27 05:50:25.616896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.616933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.617076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.617108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.617307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.617339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.617534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.617567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.617746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.617778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 
00:28:37.868 [2024-11-27 05:50:25.617980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.618012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.618198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.618229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.618430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.618463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.618714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.618747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.618890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.618921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 
00:28:37.868 [2024-11-27 05:50:25.619115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.619148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.619427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.619459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.619664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.619707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.619815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.619845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.619993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.620024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 
00:28:37.868 [2024-11-27 05:50:25.620216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.620248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.620504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.620536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.620791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.620823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.621011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.621043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 00:28:37.868 [2024-11-27 05:50:25.621189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.868 [2024-11-27 05:50:25.621220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.868 qpair failed and we were unable to recover it. 
00:28:37.868 [2024-11-27 05:50:25.621556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.868 [2024-11-27 05:50:25.621588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:37.868 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111, ECONNREFUSED) for tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 repeated from 05:50:25.621817 through 05:50:25.639434; duplicate entries trimmed ...]
00:28:37.871 [2024-11-27 05:50:25.639599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.871 [2024-11-27 05:50:25.639686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:37.871 qpair failed and we were unable to recover it.
[... the same failure sequence then repeats for tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 from 05:50:25.639910 through 05:50:25.644724; duplicate entries trimmed ...]
00:28:37.872 [2024-11-27 05:50:25.644865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.644896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.645017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.645047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.645220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.645251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.645471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.645502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.645684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.645717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 
00:28:37.872 [2024-11-27 05:50:25.645929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.645960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.646149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.646180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.646322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.646353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.646458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.646490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.646611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.646642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 
00:28:37.872 [2024-11-27 05:50:25.646769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.646801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.646980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.647012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.647179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.647252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.647536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.647572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.647693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.647727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 
00:28:37.872 [2024-11-27 05:50:25.647915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.647948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.648125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.648158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.648280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.648311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.648419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.648450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.648575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.648606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 
00:28:37.872 [2024-11-27 05:50:25.648806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.648839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.649123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.649155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.649333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.649365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.649472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.649503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.649661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.649707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 
00:28:37.872 [2024-11-27 05:50:25.649821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.649861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.650061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.650092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.650293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.650325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.650509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.650539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.650714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.650748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 
00:28:37.872 [2024-11-27 05:50:25.650962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.650994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.651182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.651213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.651406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.651439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.651572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.872 [2024-11-27 05:50:25.651603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.872 qpair failed and we were unable to recover it. 00:28:37.872 [2024-11-27 05:50:25.651796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.651829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 
00:28:37.873 [2024-11-27 05:50:25.651952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.651983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.652226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.652259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.652500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.652532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.652682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.652716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.652834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.652866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 
00:28:37.873 [2024-11-27 05:50:25.653044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.653075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.653200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.653231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.653412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.653443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.653566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.653597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.653708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.653743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 
00:28:37.873 [2024-11-27 05:50:25.653861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.653893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.654018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.654049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.654300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.654334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.654455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.654486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.654738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.654772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 
00:28:37.873 [2024-11-27 05:50:25.654886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.654917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.655040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.655072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.655186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.655222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.655397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.655428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.655558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.655589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 
00:28:37.873 [2024-11-27 05:50:25.655778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.655810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.655915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.655946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.656131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.656163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.656275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.656307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.656419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.656449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 
00:28:37.873 [2024-11-27 05:50:25.656557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.656587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.656701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.656734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.656929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.656960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.657082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.657113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.657227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.657259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 
00:28:37.873 [2024-11-27 05:50:25.657385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.657422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.657540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.657571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.657776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.657808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.658017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.658048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.658201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.658234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 
00:28:37.873 [2024-11-27 05:50:25.658360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.658391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.658615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.873 [2024-11-27 05:50:25.658646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.873 qpair failed and we were unable to recover it. 00:28:37.873 [2024-11-27 05:50:25.658782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.874 [2024-11-27 05:50:25.658814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.874 qpair failed and we were unable to recover it. 00:28:37.874 [2024-11-27 05:50:25.658933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.874 [2024-11-27 05:50:25.658964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.874 qpair failed and we were unable to recover it. 00:28:37.874 [2024-11-27 05:50:25.659239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.874 [2024-11-27 05:50:25.659270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.874 qpair failed and we were unable to recover it. 
00:28:37.874 [2024-11-27 05:50:25.659565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.874 [2024-11-27 05:50:25.659596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:37.874 qpair failed and we were unable to recover it.
00:28:37.876 [previous three-line sequence repeated for every subsequent connection attempt from 05:50:25.659 through 05:50:25.685; each occurrence is identical (addr=10.0.0.2, port=4420, errno = 111) apart from the timestamp and the tqpair pointer, which alternates between 0x7ff210000b90 and 0x7ff204000b90]
00:28:37.877 [2024-11-27 05:50:25.685701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.685734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.685875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.685906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.686051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.686084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.686346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.686377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.686635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.686667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 
00:28:37.877 [2024-11-27 05:50:25.686810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.686841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.687013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.687044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.687187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.687218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.687415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.687445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.687712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.687744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 
00:28:37.877 [2024-11-27 05:50:25.687862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.687893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.688164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.688195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.688445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.688476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.688652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.688692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.688879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.688910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 
00:28:37.877 [2024-11-27 05:50:25.689042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.689073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.689217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.689248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.689376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.689406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.689724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.689757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.689990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.690022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 
00:28:37.877 [2024-11-27 05:50:25.690162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.690194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.690327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.690358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.690628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.690665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.690827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.690858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.691049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.691079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 
00:28:37.877 [2024-11-27 05:50:25.691270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.691301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.691508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.691539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.691733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.691765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.691961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.877 [2024-11-27 05:50:25.691992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.877 qpair failed and we were unable to recover it. 00:28:37.877 [2024-11-27 05:50:25.692100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.692131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 
00:28:37.878 [2024-11-27 05:50:25.692278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.692308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.692546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.692577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.692870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.692901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.693021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.693052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.693250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.693281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 
00:28:37.878 [2024-11-27 05:50:25.693458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.693496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.693716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.693748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.694015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.694046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.694192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.694223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.694344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.694375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 
00:28:37.878 [2024-11-27 05:50:25.694554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.694585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.694757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.694790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.694909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.694940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.695125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.695156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.695399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.695430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 
00:28:37.878 [2024-11-27 05:50:25.695614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.695645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.695792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.695824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.695966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.695998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.696182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.696213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.696494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.696525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 
00:28:37.878 [2024-11-27 05:50:25.696805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.696837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.696963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.696994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.697196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.697226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.697429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.697462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.697645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.697686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 
00:28:37.878 [2024-11-27 05:50:25.697883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.697913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.698151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.698182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.698418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.698449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.698631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.698662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.698828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.698859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 
00:28:37.878 [2024-11-27 05:50:25.699077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.699108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.699278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.699309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.699521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.699559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.699803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.699836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.700092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.700123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 
00:28:37.878 [2024-11-27 05:50:25.700275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.700305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.700490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.878 [2024-11-27 05:50:25.700521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.878 qpair failed and we were unable to recover it. 00:28:37.878 [2024-11-27 05:50:25.700697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.879 [2024-11-27 05:50:25.700729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.879 qpair failed and we were unable to recover it. 00:28:37.879 [2024-11-27 05:50:25.700921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.879 [2024-11-27 05:50:25.700952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.879 qpair failed and we were unable to recover it. 00:28:37.879 [2024-11-27 05:50:25.701092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.879 [2024-11-27 05:50:25.701122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.879 qpair failed and we were unable to recover it. 
00:28:37.879 [2024-11-27 05:50:25.701396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.879 [2024-11-27 05:50:25.701427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.879 qpair failed and we were unable to recover it. 00:28:37.879 [2024-11-27 05:50:25.701725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.879 [2024-11-27 05:50:25.701757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.879 qpair failed and we were unable to recover it. 00:28:37.879 [2024-11-27 05:50:25.701893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.879 [2024-11-27 05:50:25.701925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.879 qpair failed and we were unable to recover it. 00:28:37.879 [2024-11-27 05:50:25.702165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.879 [2024-11-27 05:50:25.702197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.879 qpair failed and we were unable to recover it. 00:28:37.879 [2024-11-27 05:50:25.702387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.879 [2024-11-27 05:50:25.702418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.879 qpair failed and we were unable to recover it. 
00:28:37.879 [2024-11-27 05:50:25.702708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.879 [2024-11-27 05:50:25.702741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:37.879 qpair failed and we were unable to recover it.
00:28:37.879 [... the same three-line failure sequence repeats roughly 115 more times, timestamps 2024-11-27 05:50:25.702920 through 05:50:25.730527, elapsed marks 00:28:37.879 to 00:28:37.882, all for tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 ...]
00:28:37.882 [2024-11-27 05:50:25.730720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.730751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.730994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.731026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.731223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.731253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.731525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.731556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.731752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.731784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 
00:28:37.882 [2024-11-27 05:50:25.731972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.732003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.732257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.732288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.732465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.732497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.732690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.732723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.732851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.732882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 
00:28:37.882 [2024-11-27 05:50:25.733068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.733099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.733298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.733330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.733591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.733622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.733760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.733792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.733967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.733998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 
00:28:37.882 [2024-11-27 05:50:25.734260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.734291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.734512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.734543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.734650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.734690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.734911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.734942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.735136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.735167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 
00:28:37.882 [2024-11-27 05:50:25.735357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.735388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.735655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.735697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.735838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.735869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.882 [2024-11-27 05:50:25.736006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.882 [2024-11-27 05:50:25.736037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.882 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.736299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.736330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 
00:28:37.883 [2024-11-27 05:50:25.736458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.736489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.736756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.736788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.736986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.737018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.737303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.737334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.737461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.737492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 
00:28:37.883 [2024-11-27 05:50:25.737705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.737737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.737930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.737961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.738150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.738186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.738495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.738526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.738781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.738813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 
00:28:37.883 [2024-11-27 05:50:25.739011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.739043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.739285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.739316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.739533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.739565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.739835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.739868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.740084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.740115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 
00:28:37.883 [2024-11-27 05:50:25.740240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.740272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.740534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.740565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.740703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.740736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.740923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.740955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.741224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.741255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 
00:28:37.883 [2024-11-27 05:50:25.741512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.741544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.741768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.741800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.742024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.742055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.742208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.742239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.742453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.742483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 
00:28:37.883 [2024-11-27 05:50:25.742675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.742707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.742899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.742930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.743127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.743158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.743411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.743442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.743567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.743598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 
00:28:37.883 [2024-11-27 05:50:25.743930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.743962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.883 [2024-11-27 05:50:25.744160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.883 [2024-11-27 05:50:25.744191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.883 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.744471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.744502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.744801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.744835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.745039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.745071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 
00:28:37.884 [2024-11-27 05:50:25.745264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.745295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.745544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.745577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.745757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.745789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.745987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.746019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.746213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.746245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 
00:28:37.884 [2024-11-27 05:50:25.746523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.746555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.746691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.746723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.746919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.746951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.747153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.747185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.747406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.747438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 
00:28:37.884 [2024-11-27 05:50:25.747611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.747642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.747845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.747876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.748071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.748108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.748305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.748336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.748511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.748542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 
00:28:37.884 [2024-11-27 05:50:25.748770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.748804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.748940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.748971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.749242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.749274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.749451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.749482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 00:28:37.884 [2024-11-27 05:50:25.749667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.884 [2024-11-27 05:50:25.749706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.884 qpair failed and we were unable to recover it. 
00:28:37.887 [2024-11-27 05:50:25.776015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.776046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.776311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.776342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.776526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.776563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.776784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.776816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.776938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.776970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 
00:28:37.887 [2024-11-27 05:50:25.777175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.777206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.777360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.777391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.777522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.777554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.777729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.777762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.777890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.777921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 
00:28:37.887 [2024-11-27 05:50:25.778128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.778160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.778479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.778510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.778787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.778819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.778957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.778988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.779185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.779216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 
00:28:37.887 [2024-11-27 05:50:25.779522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.779553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.779756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.779788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.779921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.779952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.887 [2024-11-27 05:50:25.780119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.887 [2024-11-27 05:50:25.780151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.887 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.780304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.780335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 
00:28:37.888 [2024-11-27 05:50:25.780614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.780645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.780806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.780838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.780968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.780999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.781200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.781231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.781365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.781397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 
00:28:37.888 [2024-11-27 05:50:25.781584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.781616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.781830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.781861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.782134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.782167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.782419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.782451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.782653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.782694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 
00:28:37.888 [2024-11-27 05:50:25.782828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.782859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.783085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.783117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.783317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.783348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.783620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.783651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.783843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.783875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 
00:28:37.888 [2024-11-27 05:50:25.784058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.784088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.784347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.784379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.784518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.784549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.784745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.784778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.784933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.784965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 
00:28:37.888 [2024-11-27 05:50:25.785116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.785147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.785297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.785328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.785590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.785627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.785854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.785886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.786120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.786152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 
00:28:37.888 [2024-11-27 05:50:25.786371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.786402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.786606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.786637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.786824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.786856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.787106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.787136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.787338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.787370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 
00:28:37.888 [2024-11-27 05:50:25.787666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.787706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.787908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.888 [2024-11-27 05:50:25.787939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.888 qpair failed and we were unable to recover it. 00:28:37.888 [2024-11-27 05:50:25.788165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.788196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.788417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.788449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.788743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.788775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 
00:28:37.889 [2024-11-27 05:50:25.789032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.789064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.789295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.789327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.789510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.789542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.789682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.789715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.789995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.790027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 
00:28:37.889 [2024-11-27 05:50:25.790167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.790198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.790402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.790434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.790620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.790651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.790916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.790947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.791156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.791187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 
00:28:37.889 [2024-11-27 05:50:25.791417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.791449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.791668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.791708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.791903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.791935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.792069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.792100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.792301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.792334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 
00:28:37.889 [2024-11-27 05:50:25.792538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.792570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.792831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.792865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.793067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.793099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.793383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.793413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.793697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.793730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 
00:28:37.889 [2024-11-27 05:50:25.793962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.793993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.794198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.794229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.794519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.794551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.794754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.794785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 00:28:37.889 [2024-11-27 05:50:25.794984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.889 [2024-11-27 05:50:25.795016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.889 qpair failed and we were unable to recover it. 
00:28:37.892 [2024-11-27 05:50:25.823660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.892 [2024-11-27 05:50:25.823705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.892 qpair failed and we were unable to recover it. 00:28:37.892 [2024-11-27 05:50:25.823902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.892 [2024-11-27 05:50:25.823933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.892 qpair failed and we were unable to recover it. 00:28:37.892 [2024-11-27 05:50:25.824138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.892 [2024-11-27 05:50:25.824170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.892 qpair failed and we were unable to recover it. 00:28:37.892 [2024-11-27 05:50:25.824459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.892 [2024-11-27 05:50:25.824491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.892 qpair failed and we were unable to recover it. 00:28:37.892 [2024-11-27 05:50:25.824743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.892 [2024-11-27 05:50:25.824776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.892 qpair failed and we were unable to recover it. 
00:28:37.892 [2024-11-27 05:50:25.824958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.892 [2024-11-27 05:50:25.824989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.892 qpair failed and we were unable to recover it. 00:28:37.892 [2024-11-27 05:50:25.825241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.892 [2024-11-27 05:50:25.825273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.892 qpair failed and we were unable to recover it. 00:28:37.892 [2024-11-27 05:50:25.825454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.892 [2024-11-27 05:50:25.825486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.892 qpair failed and we were unable to recover it. 00:28:37.892 [2024-11-27 05:50:25.825706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.892 [2024-11-27 05:50:25.825740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.892 qpair failed and we were unable to recover it. 00:28:37.892 [2024-11-27 05:50:25.825969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.892 [2024-11-27 05:50:25.826002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.892 qpair failed and we were unable to recover it. 
00:28:37.893 [2024-11-27 05:50:25.826132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.826165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.826400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.826431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.826649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.826687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.826888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.826919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.827115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.827146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 
00:28:37.893 [2024-11-27 05:50:25.827467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.827498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.827694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.827725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.827951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.827984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.828178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.828211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.828509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.828541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 
00:28:37.893 [2024-11-27 05:50:25.828786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.828819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.829016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.829048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.829275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.829306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.829647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.829687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.829888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.829920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 
00:28:37.893 [2024-11-27 05:50:25.830171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.830202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.830415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.830447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.830639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.830677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:37.893 [2024-11-27 05:50:25.830888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.893 [2024-11-27 05:50:25.830920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:37.893 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.831139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.831172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 
00:28:38.173 [2024-11-27 05:50:25.831434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.831467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.831691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.831725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.832004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.832035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.832290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.832322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.832586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.832617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 
00:28:38.173 [2024-11-27 05:50:25.832831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.832865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.833000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.833032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.833158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.833189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.833491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.833523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.833727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.833766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 
00:28:38.173 [2024-11-27 05:50:25.833973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.834005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.834141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.834173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.834400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.834432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.834732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.834765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.834948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.834979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 
00:28:38.173 [2024-11-27 05:50:25.835175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.835207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.835425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.835458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.835662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.835703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.835847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.835880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.836021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.836052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 
00:28:38.173 [2024-11-27 05:50:25.836315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.836347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.836709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.173 [2024-11-27 05:50:25.836741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.173 qpair failed and we were unable to recover it. 00:28:38.173 [2024-11-27 05:50:25.837059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.837091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.837324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.837356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.837538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.837569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 
00:28:38.174 [2024-11-27 05:50:25.837718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.837750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.837935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.837964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.838156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.838187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.838428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.838460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.838603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.838633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 
00:28:38.174 [2024-11-27 05:50:25.838794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.838826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.839043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.839075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.839396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.839429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.839728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.839761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.839956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.839987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 
00:28:38.174 [2024-11-27 05:50:25.840118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.840150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.840430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.840462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.840741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.840773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.840905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.840934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.841069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.841099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 
00:28:38.174 [2024-11-27 05:50:25.841309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.841338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.841534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.841565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.841746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.841777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.842034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.842066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 00:28:38.174 [2024-11-27 05:50:25.842273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.174 [2024-11-27 05:50:25.842304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.174 qpair failed and we were unable to recover it. 
00:28:38.174 [2024-11-27 05:50:25.842580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.174 [2024-11-27 05:50:25.842612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.174 qpair failed and we were unable to recover it.
00:28:38.177 [previous three-line error sequence repeated through 2024-11-27 05:50:25.872183: every connect() attempt to addr=10.0.0.2, port=4420 on tqpair=0x7ff210000b90 failed with errno = 111, and the qpair could not be recovered]
00:28:38.177 [2024-11-27 05:50:25.872501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.177 [2024-11-27 05:50:25.872534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.177 qpair failed and we were unable to recover it. 00:28:38.177 [2024-11-27 05:50:25.872693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.177 [2024-11-27 05:50:25.872725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.177 qpair failed and we were unable to recover it. 00:28:38.177 [2024-11-27 05:50:25.872927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.177 [2024-11-27 05:50:25.872960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.177 qpair failed and we were unable to recover it. 00:28:38.177 [2024-11-27 05:50:25.873165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.177 [2024-11-27 05:50:25.873197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.177 qpair failed and we were unable to recover it. 00:28:38.177 [2024-11-27 05:50:25.873519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.177 [2024-11-27 05:50:25.873550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.177 qpair failed and we were unable to recover it. 
00:28:38.177 [2024-11-27 05:50:25.873747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.177 [2024-11-27 05:50:25.873780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.177 qpair failed and we were unable to recover it. 00:28:38.177 [2024-11-27 05:50:25.874034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.177 [2024-11-27 05:50:25.874066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.177 qpair failed and we were unable to recover it. 00:28:38.177 [2024-11-27 05:50:25.874253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.177 [2024-11-27 05:50:25.874285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.177 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.874592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.874625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.874856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.874894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 
00:28:38.178 [2024-11-27 05:50:25.875087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.875118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.875337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.875368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.875646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.875684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.875973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.876005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.876155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.876185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 
00:28:38.178 [2024-11-27 05:50:25.876433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.876465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.876658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.876700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.876951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.876983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.877238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.877270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.877526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.877559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 
00:28:38.178 [2024-11-27 05:50:25.877868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.877901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.878112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.878143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.878394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.878426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.878708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.878742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.879008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.879040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 
00:28:38.178 [2024-11-27 05:50:25.879319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.879352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.879641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.879692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.879847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.879879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.880044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.880077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.880270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.880302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 
00:28:38.178 [2024-11-27 05:50:25.880495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.880526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.880821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.880855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.881106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.881137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.881429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.881461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.881723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.881757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 
00:28:38.178 [2024-11-27 05:50:25.881955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.881986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.882154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.882186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.882431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.882465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.882765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.882798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.883002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.883034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 
00:28:38.178 [2024-11-27 05:50:25.883230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.883262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.883461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.883493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.883770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.883803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.884091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.884124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.178 qpair failed and we were unable to recover it. 00:28:38.178 [2024-11-27 05:50:25.884425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.178 [2024-11-27 05:50:25.884458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 
00:28:38.179 [2024-11-27 05:50:25.884654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.884703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.884859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.884889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.885082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.885113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.885392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.885424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.885604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.885641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 
00:28:38.179 [2024-11-27 05:50:25.885859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.885891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.886086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.886118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.886362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.886393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.886644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.886685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.886957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.886990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 
00:28:38.179 [2024-11-27 05:50:25.887186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.887216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.887507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.887539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.887692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.887724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.887977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.888009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.888144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.888174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 
00:28:38.179 [2024-11-27 05:50:25.888398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.888429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.888641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.888685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.888889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.888921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.889224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.889257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.889569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.889600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 
00:28:38.179 [2024-11-27 05:50:25.889879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.889913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.890139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.890171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.890404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.890436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.890699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.890732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.890976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.891012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 
00:28:38.179 [2024-11-27 05:50:25.891160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.891192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.891441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.891474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.891729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.891762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.891882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.891913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.892063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.892095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 
00:28:38.179 [2024-11-27 05:50:25.892311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.892343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.892549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.892582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.892726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.892758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.893034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.893067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.893405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.893437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 
00:28:38.179 [2024-11-27 05:50:25.893723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.893757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.894012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.894044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.179 [2024-11-27 05:50:25.894309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.179 [2024-11-27 05:50:25.894340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.179 qpair failed and we were unable to recover it. 00:28:38.180 [2024-11-27 05:50:25.894612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.180 [2024-11-27 05:50:25.894643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.180 qpair failed and we were unable to recover it. 00:28:38.180 [2024-11-27 05:50:25.894804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.180 [2024-11-27 05:50:25.894836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.180 qpair failed and we were unable to recover it. 
00:28:38.180 [2024-11-27 05:50:25.895033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.895065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.895336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.895367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.895625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.895655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.895941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.895973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.896120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.896163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.896296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.896326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.896542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.896573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.896723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.896756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.897007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.897042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.897300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.897330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.897597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.897628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.897848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.897881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.898145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.898177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.898482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.898514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.898740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.898773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.898893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.898925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.899105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.899136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.899473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.899505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.899773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.899806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.900008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.900040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.900236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.900268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.900545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.900577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.900777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.900809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.900929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.900960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.901172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.901204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.901508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.901539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.901809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.901841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.902039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.902071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.902306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.902337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.902596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.902627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.902808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.902843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.903031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.903063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.903254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.903286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.903569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.903600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.903745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.903779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.180 [2024-11-27 05:50:25.903966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.180 [2024-11-27 05:50:25.903997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.180 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.904138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.904169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.904408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.904440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.904634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.904667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.904828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.904860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.905057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.905089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.905295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.905327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.905530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.905561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.905864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.905897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.906103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.906140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.906272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.906304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.906556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.906588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.906817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.906851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.907034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.907065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.907196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.907229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.907527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.907559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.907763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.907796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.908001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.908032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.908295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.908325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.908649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.908690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.908920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.908952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.909098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.909130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.909361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.909393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.909623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.909656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.909936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.909967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.910254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.910286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.910398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.910430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.910690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.910723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.911031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.911063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.911336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.911368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.911644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.911685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.911892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.911923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.912204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.912237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.912464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.912495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.912782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.912816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.913093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.913128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.913417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.913450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.913695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.913728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.181 [2024-11-27 05:50:25.913939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.181 [2024-11-27 05:50:25.913970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.181 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.914278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.914310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.914510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.914542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.914826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.914859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.914994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.915026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.915230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.915261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.915448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.915479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.915685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.915718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.915949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.915982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.916176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.916208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.916484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.916516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.916720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.916758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.916911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.916942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.917203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.917235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.917433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.917465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.917618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.917650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.917866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.917899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.918082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.918114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.918257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.918289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.918570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.918602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.918888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.918921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.919134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.182 [2024-11-27 05:50:25.919167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.182 qpair failed and we were unable to recover it.
00:28:38.182 [2024-11-27 05:50:25.919431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.919462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 00:28:38.182 [2024-11-27 05:50:25.919666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.919707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 00:28:38.182 [2024-11-27 05:50:25.919855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.919888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 00:28:38.182 [2024-11-27 05:50:25.920047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.920079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 00:28:38.182 [2024-11-27 05:50:25.920299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.920330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 
00:28:38.182 [2024-11-27 05:50:25.920513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.920544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 00:28:38.182 [2024-11-27 05:50:25.920820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.920852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 00:28:38.182 [2024-11-27 05:50:25.920977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.921010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 00:28:38.182 [2024-11-27 05:50:25.921323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.921354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 00:28:38.182 [2024-11-27 05:50:25.921631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.921663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 
00:28:38.182 [2024-11-27 05:50:25.921885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.921917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.182 qpair failed and we were unable to recover it. 00:28:38.182 [2024-11-27 05:50:25.922055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.182 [2024-11-27 05:50:25.922086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.922291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.922322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.922575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.922608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.922920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.922952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 
00:28:38.183 [2024-11-27 05:50:25.923206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.923238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.923519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.923552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.923838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.923872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.924098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.924129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.924379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.924411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 
00:28:38.183 [2024-11-27 05:50:25.924691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.924724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.924876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.924908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.925060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.925091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.925281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.925312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.925587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.925619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 
00:28:38.183 [2024-11-27 05:50:25.925908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.925942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.926072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.926103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.926286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.926318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.926592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.926624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.926918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.926957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 
00:28:38.183 [2024-11-27 05:50:25.927221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.927252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.927488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.927520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.927726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.927759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.927980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.928012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.928260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.928292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 
00:28:38.183 [2024-11-27 05:50:25.928557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.928590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.928790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.928823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.929089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.929120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.929370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.929401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.929706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.929739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 
00:28:38.183 [2024-11-27 05:50:25.929968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.930001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.930133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.930163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.930470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.930502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.930784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.930817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.931039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.931071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 
00:28:38.183 [2024-11-27 05:50:25.931369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.931401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.931517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.931549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.931748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.931780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.932055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.932086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.183 qpair failed and we were unable to recover it. 00:28:38.183 [2024-11-27 05:50:25.932220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.183 [2024-11-27 05:50:25.932252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 
00:28:38.184 [2024-11-27 05:50:25.932528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.932560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.932839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.932872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.933158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.933191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.933473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.933504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.933807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.933839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 
00:28:38.184 [2024-11-27 05:50:25.934108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.934140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.934432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.934464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.934653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.934693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.934974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.935005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.935268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.935300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 
00:28:38.184 [2024-11-27 05:50:25.935626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.935658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.935847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.935879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.936096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.936128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.936313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.936344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.936611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.936642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 
00:28:38.184 [2024-11-27 05:50:25.936701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c34b20 (9): Bad file descriptor 00:28:38.184 [2024-11-27 05:50:25.937117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.937195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.937541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.937618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.937884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.937921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.938075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.938108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 00:28:38.184 [2024-11-27 05:50:25.938336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.184 [2024-11-27 05:50:25.938379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.184 qpair failed and we were unable to recover it. 
00:28:38.184 [2024-11-27 05:50:25.938610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.184 [2024-11-27 05:50:25.938643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:38.184 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously for timestamps 05:50:25.938893 through 05:50:25.948123 ...]
00:28:38.185 [2024-11-27 05:50:25.948420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.948453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.948724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.948757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.948984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.949015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.949208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.949239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.949436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.949469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 
00:28:38.185 [2024-11-27 05:50:25.949664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.949704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.949979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.950013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.950208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.950240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.950434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.950466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.950680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.950714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 
00:28:38.185 [2024-11-27 05:50:25.950996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.951028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.951313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.951346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.951548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.951579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.951835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.951869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.952065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.952098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 
00:28:38.185 [2024-11-27 05:50:25.952376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.952409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.952699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.952732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.953007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.953046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.953329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.953362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 00:28:38.185 [2024-11-27 05:50:25.953563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.185 [2024-11-27 05:50:25.953595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.185 qpair failed and we were unable to recover it. 
00:28:38.185 [2024-11-27 05:50:25.953872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.953906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.954103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.954135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.954319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.954351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.954625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.954657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.954870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.954902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 
00:28:38.186 [2024-11-27 05:50:25.955104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.955137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.955445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.955478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.955712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.955745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.956002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.956035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.956289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.956323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 
00:28:38.186 [2024-11-27 05:50:25.956520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.956553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.956835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.956868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.957086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.957118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.957395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.957428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.957580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.957611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 
00:28:38.186 [2024-11-27 05:50:25.957896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.957931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.958133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.958164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.958389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.958422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.958728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.958762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.958988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.959021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 
00:28:38.186 [2024-11-27 05:50:25.959204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.959237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.959464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.959496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.959694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.959727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.959926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.959960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.960244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.960322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 
00:28:38.186 [2024-11-27 05:50:25.960534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.960572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.960836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.960872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.961152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.961184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.961472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.961504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.961730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.961763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 
00:28:38.186 [2024-11-27 05:50:25.961960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.961992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.962174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.962205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.962484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.962517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.962700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.962733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.962999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.963031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 
00:28:38.186 [2024-11-27 05:50:25.963159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.963191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.963370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.963402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.963628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.186 [2024-11-27 05:50:25.963678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.186 qpair failed and we were unable to recover it. 00:28:38.186 [2024-11-27 05:50:25.963979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.964011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.964288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.964320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 
00:28:38.187 [2024-11-27 05:50:25.964519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.964550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.964808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.964841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.965116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.965148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.965427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.965458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.965745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.965778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 
00:28:38.187 [2024-11-27 05:50:25.965978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.966011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.966207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.966239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.966383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.966416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.966694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.966728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.966917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.966950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 
00:28:38.187 [2024-11-27 05:50:25.967218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.967249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.967507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.967539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.967814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.967846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.968042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.968075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 00:28:38.187 [2024-11-27 05:50:25.968271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.968302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 
00:28:38.187 [2024-11-27 05:50:25.968497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.187 [2024-11-27 05:50:25.968529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.187 qpair failed and we were unable to recover it. 
[... the connect()/qpair failure pair above repeats continuously from 05:50:25.968814 through 05:50:25.999820 (log timestamps 00:28:38.187-00:28:38.190); every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111, followed by "qpair failed and we were unable to recover it."; tqpair is 0x7ff210000b90 throughout except for single occurrences of 0x1c26be0, 0x7ff208000b90, and 0x7ff204000b90 ...]
00:28:38.190 [2024-11-27 05:50:26.000021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.000054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.000313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.000346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.000623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.000655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.000860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.000892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.001193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.001226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 
00:28:38.190 [2024-11-27 05:50:26.001425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.001456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.001651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.001694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.001962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.001994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.002178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.002209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.002491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.002522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 
00:28:38.190 [2024-11-27 05:50:26.002799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.002832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.003088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.003120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.003319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.003350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.190 [2024-11-27 05:50:26.003550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.190 [2024-11-27 05:50:26.003583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.190 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.003777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.003815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 
00:28:38.191 [2024-11-27 05:50:26.004065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.004098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.004292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.004324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.004457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.004488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.004721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.004753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.005062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.005093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 
00:28:38.191 [2024-11-27 05:50:26.005374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.005406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.005594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.005625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.005779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.005811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.005993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.006025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.006316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.006348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 
00:28:38.191 [2024-11-27 05:50:26.006542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.006574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.006758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.006791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.007063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.007096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.007356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.007389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.007692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.007725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 
00:28:38.191 [2024-11-27 05:50:26.008018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.008049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.008278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.008311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.008512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.008543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.008833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.008865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.009165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.009197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 
00:28:38.191 [2024-11-27 05:50:26.009466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.009498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.009688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.009720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.009903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.009934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.010210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.010242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.010501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.010532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 
00:28:38.191 [2024-11-27 05:50:26.010791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.010824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.010956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.010988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.011250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.011281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.011582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.011614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.011846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.011879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 
00:28:38.191 [2024-11-27 05:50:26.012139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.012171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.012474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.012505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.191 qpair failed and we were unable to recover it. 00:28:38.191 [2024-11-27 05:50:26.012792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.191 [2024-11-27 05:50:26.012826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.013129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.013161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.013355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.013386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 
00:28:38.192 [2024-11-27 05:50:26.013570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.013602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.013791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.013823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.014077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.014109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.014408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.014440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.014713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.014752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 
00:28:38.192 [2024-11-27 05:50:26.014965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.014997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.015199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.015231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.015483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.015514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.015770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.015802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.016100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.016132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 
00:28:38.192 [2024-11-27 05:50:26.016402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.016433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.016729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.016762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.016974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.017006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.017210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.017241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.017456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.017488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 
00:28:38.192 [2024-11-27 05:50:26.017712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.017745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.018032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.018064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.018324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.018355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.018662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.018704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.018982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.019014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 
00:28:38.192 [2024-11-27 05:50:26.019296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.019327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.019616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.019647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.019860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.019893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.020194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.020225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.020510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.020542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 
00:28:38.192 [2024-11-27 05:50:26.020793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.020826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.021020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.021052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.021233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.021263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.021406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.021439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 00:28:38.192 [2024-11-27 05:50:26.021721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.192 [2024-11-27 05:50:26.021753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.192 qpair failed and we were unable to recover it. 
00:28:38.195 [2024-11-27 05:50:26.051790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.195 [2024-11-27 05:50:26.051823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.195 qpair failed and we were unable to recover it. 00:28:38.195 [2024-11-27 05:50:26.052107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.195 [2024-11-27 05:50:26.052139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.195 qpair failed and we were unable to recover it. 00:28:38.195 [2024-11-27 05:50:26.052335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.195 [2024-11-27 05:50:26.052366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.195 qpair failed and we were unable to recover it. 00:28:38.195 [2024-11-27 05:50:26.052573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.195 [2024-11-27 05:50:26.052604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.195 qpair failed and we were unable to recover it. 00:28:38.195 [2024-11-27 05:50:26.052795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.195 [2024-11-27 05:50:26.052827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.195 qpair failed and we were unable to recover it. 
00:28:38.195 [2024-11-27 05:50:26.053086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.195 [2024-11-27 05:50:26.053117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.195 qpair failed and we were unable to recover it. 00:28:38.195 [2024-11-27 05:50:26.053310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.195 [2024-11-27 05:50:26.053342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.195 qpair failed and we were unable to recover it. 00:28:38.195 [2024-11-27 05:50:26.053541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.195 [2024-11-27 05:50:26.053573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.195 qpair failed and we were unable to recover it. 00:28:38.195 [2024-11-27 05:50:26.053762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.195 [2024-11-27 05:50:26.053794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.195 qpair failed and we were unable to recover it. 00:28:38.195 [2024-11-27 05:50:26.053938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.053970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 
00:28:38.196 [2024-11-27 05:50:26.054152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.054183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.054513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.054546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.054826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.054858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.055146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.055178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.055461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.055492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 
00:28:38.196 [2024-11-27 05:50:26.055777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.055809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.055994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.056025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.056221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.056252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.056433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.056465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.056645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.056684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 
00:28:38.196 [2024-11-27 05:50:26.056963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.056995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.057264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.057296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.057589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.057620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.057818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.057850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.057981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.058018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 
00:28:38.196 [2024-11-27 05:50:26.058220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.058251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.058527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.058559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.058780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.058812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.058999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.059030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.059307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.059338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 
00:28:38.196 [2024-11-27 05:50:26.059543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.059575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.059758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.059791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.059973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.060005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.060285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.060317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.060590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.060622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 
00:28:38.196 [2024-11-27 05:50:26.060915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.060948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.061224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.061256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.061479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.061510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.061703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.061735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.061870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.061901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 
00:28:38.196 [2024-11-27 05:50:26.062103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.062133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.062408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.062441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.196 [2024-11-27 05:50:26.062648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.196 [2024-11-27 05:50:26.062690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.196 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.062970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.063001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.063181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.063212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 
00:28:38.197 [2024-11-27 05:50:26.063491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.063523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.063667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.063710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.063991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.064023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.064323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.064354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.064553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.064585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 
00:28:38.197 [2024-11-27 05:50:26.064780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.064813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.065091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.065122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.065317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.065348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.065609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.065640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.065941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.065973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 
00:28:38.197 [2024-11-27 05:50:26.066195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.066227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.066531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.066562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.066827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.066861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.067146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.067178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.067483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.067514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 
00:28:38.197 [2024-11-27 05:50:26.067780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.067812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.068104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.068135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.068434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.068465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.068691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.068724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.068998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.069034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 
00:28:38.197 [2024-11-27 05:50:26.069259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.069291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.069573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.069604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.069834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.069868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.070140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.070171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.070421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.070454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 
00:28:38.197 [2024-11-27 05:50:26.070758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.070791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.070999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.071031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.071225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.071257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.071510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.071542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.071845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.071878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 
00:28:38.197 [2024-11-27 05:50:26.072092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.072124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.072377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.072409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.072711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.072743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.072967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.072999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 00:28:38.197 [2024-11-27 05:50:26.073280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.073311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.197 qpair failed and we were unable to recover it. 
00:28:38.197 [2024-11-27 05:50:26.073525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.197 [2024-11-27 05:50:26.073557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.073742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.073775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.074048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.074079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.074359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.074390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.074691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.074725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 
00:28:38.198 [2024-11-27 05:50:26.074997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.075028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.075235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.075267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.075449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.075480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.075661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.075702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.075932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.075964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 
00:28:38.198 [2024-11-27 05:50:26.076164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.076196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.076405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.076437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.076702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.076733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.076985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.077017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.077302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.077333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 
00:28:38.198 [2024-11-27 05:50:26.077634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.077666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.077820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.077852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.078132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.078163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.078459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1922956 Killed "${NVMF_APP[@]}" "$@" 00:28:38.198 [2024-11-27 05:50:26.078494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.078752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.078785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 
00:28:38.198 [2024-11-27 05:50:26.078992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.079024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.079204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.079237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:38.198 [2024-11-27 05:50:26.079514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.079548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:38.198 [2024-11-27 05:50:26.079745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.079781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 
00:28:38.198 [2024-11-27 05:50:26.079925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.079957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:38.198 [2024-11-27 05:50:26.080177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.080212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.198 [2024-11-27 05:50:26.080484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.080519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.198 [2024-11-27 05:50:26.080803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.080838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 
00:28:38.198 [2024-11-27 05:50:26.081118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.081152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.081373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.081406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.081601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.081634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.081946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.081979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.082240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.082272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 
00:28:38.198 [2024-11-27 05:50:26.082534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.082565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.082762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.082795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.198 qpair failed and we were unable to recover it. 00:28:38.198 [2024-11-27 05:50:26.083075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.198 [2024-11-27 05:50:26.083107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.083361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.083393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.083595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.083631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 
00:28:38.199 [2024-11-27 05:50:26.083835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.083868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.084050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.084082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.084361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.084393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.084612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.084643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.084801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.084833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 
00:28:38.199 [2024-11-27 05:50:26.085087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.085120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.085260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.085291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.085483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.085517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.085797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.085830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.085987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.086019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 
00:28:38.199 [2024-11-27 05:50:26.086152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.086196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.086329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.086360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.086637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.086684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.086988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.087019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.087269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.087301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 
00:28:38.199 [2024-11-27 05:50:26.087574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1923891 00:28:38.199 [2024-11-27 05:50:26.087608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.087899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.087933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1923891 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:38.199 [2024-11-27 05:50:26.088083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.088115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 
00:28:38.199 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1923891 ']' 00:28:38.199 [2024-11-27 05:50:26.088371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.088412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.088544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.088576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.199 [2024-11-27 05:50:26.088758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.088794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:38.199 [2024-11-27 05:50:26.089113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.089148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 
00:28:38.199 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.199 [2024-11-27 05:50:26.089409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.089444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:38.199 [2024-11-27 05:50:26.089725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.089760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.199 [2024-11-27 05:50:26.089981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.090015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.090294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.090326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 
00:28:38.199 [2024-11-27 05:50:26.090540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.090573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.199 qpair failed and we were unable to recover it. 00:28:38.199 [2024-11-27 05:50:26.090713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.199 [2024-11-27 05:50:26.090747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.091005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.091040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.091236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.091267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.091553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.091586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 
00:28:38.200 [2024-11-27 05:50:26.091815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.091849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.091995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.092039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.092274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.092308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.092501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.092533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.092821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.092855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 
00:28:38.200 [2024-11-27 05:50:26.093111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.093144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.093449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.093482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.093712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.093744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.093945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.093981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.094194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.094226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 
00:28:38.200 [2024-11-27 05:50:26.094429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.094466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.094764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.094797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.095097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.095130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.095273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.095304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.095515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.095548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 
00:28:38.200 [2024-11-27 05:50:26.095857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.095890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.096081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.096113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.096412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.096444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.096722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.096755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.096958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.096991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 
00:28:38.200 [2024-11-27 05:50:26.097209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.097241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.097369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.097400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.097622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.097655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.097924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.097957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.098213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.098245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 
00:28:38.200 [2024-11-27 05:50:26.098497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.098529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.098720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.098754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.099043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.099076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.099339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.099416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 00:28:38.200 [2024-11-27 05:50:26.099724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.200 [2024-11-27 05:50:26.099764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.200 qpair failed and we were unable to recover it. 
00:28:38.201 [2024-11-27 05:50:26.100051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.100085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.100356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.100388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.100612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.100643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.100940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.100975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.101123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.101155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 
00:28:38.201 [2024-11-27 05:50:26.101356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.101389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.101598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.101630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.101960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.101994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.102280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.102312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.102507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.102542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 
00:28:38.201 [2024-11-27 05:50:26.102757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.102791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.103048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.103092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.103346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.103378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.103605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.103637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.103852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.103884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 
00:28:38.201 [2024-11-27 05:50:26.104079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.104110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.104302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.104334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.104625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.104657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.104972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.105003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.105132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.105164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 
00:28:38.201 [2024-11-27 05:50:26.105365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.105399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.105636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.105669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.105935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.105970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.106153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.106184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.106312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.106344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 
00:28:38.201 [2024-11-27 05:50:26.106605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.106638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.106908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.106941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.107086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.107117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.107306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.201 [2024-11-27 05:50:26.107337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.201 qpair failed and we were unable to recover it. 00:28:38.201 [2024-11-27 05:50:26.107588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.107619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 
00:28:38.202 [2024-11-27 05:50:26.107934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.107969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.108252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.108284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.108489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.108521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.108721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.108754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.108944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.108976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 
00:28:38.202 [2024-11-27 05:50:26.109188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.109220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.109426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.109458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.109737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.109771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.110060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.110139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.110357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.110393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 
00:28:38.202 [2024-11-27 05:50:26.110541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.110573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.110716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.110751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.111036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.111069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.111202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.111235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.111418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.111450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 
00:28:38.202 [2024-11-27 05:50:26.111684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.111719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.111860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.111892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.112197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.112230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.112426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.112457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.112746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.112780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 
00:28:38.202 [2024-11-27 05:50:26.112980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.113014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.113140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.113181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.113486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.113519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.113756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.113790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.113934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.113966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 
00:28:38.202 [2024-11-27 05:50:26.114170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.114205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.114330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.114362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.114629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.114663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.114861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.114895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.115165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.115199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 
00:28:38.202 [2024-11-27 05:50:26.115473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.115506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.115801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.115835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.116088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.116121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.116381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.116415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 00:28:38.202 [2024-11-27 05:50:26.116600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.202 [2024-11-27 05:50:26.116636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.202 qpair failed and we were unable to recover it. 
00:28:38.202 [2024-11-27 05:50:26.116876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.116909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.117026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.117059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.117340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.117373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.117505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.117538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.117743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.117778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 
00:28:38.203 [2024-11-27 05:50:26.117905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.117937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.118250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.118282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.118481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.118515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.118737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.118770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.119026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.119059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 
00:28:38.203 [2024-11-27 05:50:26.119247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.119280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.119476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.119512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.119699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.119731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.120066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.120144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.120309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.120346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 
00:28:38.203 [2024-11-27 05:50:26.120555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.120587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.120791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.120824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.120971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.121005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.121195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.121225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 00:28:38.203 [2024-11-27 05:50:26.121481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.203 [2024-11-27 05:50:26.121513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.203 qpair failed and we were unable to recover it. 
00:28:38.203 [... the same error pair (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error), each followed by "qpair failed and we were unable to recover it.", repeats continuously from 05:50:26.121796 through 05:50:26.142172 against addr=10.0.0.2, port=4420, for tqpair handles 0x7ff210000b90, 0x7ff208000b90, and 0x7ff204000b90 ...]
00:28:38.206 [2024-11-27 05:50:26.142617] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:28:38.206 [2024-11-27 05:50:26.142691] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:38.206 [2024-11-27 05:50:26.147896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.206 [2024-11-27 05:50:26.147928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.206 qpair failed and we were unable to recover it. 00:28:38.206 [2024-11-27 05:50:26.148138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.206 [2024-11-27 05:50:26.148169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.206 qpair failed and we were unable to recover it. 00:28:38.206 [2024-11-27 05:50:26.148388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.206 [2024-11-27 05:50:26.148419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.206 qpair failed and we were unable to recover it. 00:28:38.206 [2024-11-27 05:50:26.148625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.206 [2024-11-27 05:50:26.148658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.206 qpair failed and we were unable to recover it. 00:28:38.206 [2024-11-27 05:50:26.148796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.206 [2024-11-27 05:50:26.148828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.206 qpair failed and we were unable to recover it. 
00:28:38.206 [2024-11-27 05:50:26.148952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.206 [2024-11-27 05:50:26.148983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.206 qpair failed and we were unable to recover it. 00:28:38.206 [2024-11-27 05:50:26.149162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.206 [2024-11-27 05:50:26.149194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.206 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.149463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.149495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.149773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.149806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.150079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.150112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 
00:28:38.207 [2024-11-27 05:50:26.150248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.150280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.150481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.150513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.150759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.150792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.150902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.150934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.151118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.151151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 
00:28:38.207 [2024-11-27 05:50:26.151329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.151362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.151612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.151651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.151850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.151882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.152081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.152114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.152257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.152290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 
00:28:38.207 [2024-11-27 05:50:26.152560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.152592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.152786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.152820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.153010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.153042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.153261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.153294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.153580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.153614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 
00:28:38.207 [2024-11-27 05:50:26.153903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.153936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.154154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.154188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.154396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.154428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.154572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.154624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.207 [2024-11-27 05:50:26.154851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.154885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 
00:28:38.207 [2024-11-27 05:50:26.155028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.207 [2024-11-27 05:50:26.155061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.207 qpair failed and we were unable to recover it. 00:28:38.486 [2024-11-27 05:50:26.155331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.486 [2024-11-27 05:50:26.155364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.486 qpair failed and we were unable to recover it. 00:28:38.486 [2024-11-27 05:50:26.155510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.486 [2024-11-27 05:50:26.155544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.486 qpair failed and we were unable to recover it. 00:28:38.486 [2024-11-27 05:50:26.155741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.155774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.155968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.156001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 
00:28:38.487 [2024-11-27 05:50:26.156132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.156165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.156353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.156386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.156598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.156632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.156840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.156874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.157014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.157047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 
00:28:38.487 [2024-11-27 05:50:26.157158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.157189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.157371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.157404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.157605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.157639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.157837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.157912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.158106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.158141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 
00:28:38.487 [2024-11-27 05:50:26.158364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.158397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.158650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.158694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.158899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.158932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.159068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.159101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.159227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.159258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 
00:28:38.487 [2024-11-27 05:50:26.159508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.159540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.159824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.159856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.160043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.160076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.160213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.160246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.160436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.160468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 
00:28:38.487 [2024-11-27 05:50:26.160716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.160750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.160996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.161038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.161174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.161207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.161383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.161415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.161541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.161573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 
00:28:38.487 [2024-11-27 05:50:26.161700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.161736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.161927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.161958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.162083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.162115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.162312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.162345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.162488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.162520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 
00:28:38.487 [2024-11-27 05:50:26.162772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.162806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.162998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.163033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.163243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.163275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.163461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.163494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 00:28:38.487 [2024-11-27 05:50:26.163682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.487 [2024-11-27 05:50:26.163716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.487 qpair failed and we were unable to recover it. 
00:28:38.487 [2024-11-27 05:50:26.164000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.488 [2024-11-27 05:50:26.164042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.488 qpair failed and we were unable to recover it. 00:28:38.488 [2024-11-27 05:50:26.164233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.488 [2024-11-27 05:50:26.164266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.488 qpair failed and we were unable to recover it. 00:28:38.488 [2024-11-27 05:50:26.164479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.488 [2024-11-27 05:50:26.164511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.488 qpair failed and we were unable to recover it. 00:28:38.488 [2024-11-27 05:50:26.164740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.488 [2024-11-27 05:50:26.164772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.488 qpair failed and we were unable to recover it. 00:28:38.488 [2024-11-27 05:50:26.164982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.488 [2024-11-27 05:50:26.165016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.488 qpair failed and we were unable to recover it. 
00:28:38.488 [2024-11-27 05:50:26.165121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.488 [2024-11-27 05:50:26.165153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420
00:28:38.488 qpair failed and we were unable to recover it.
00:28:38.490 [2024-11-27 05:50:26.184983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.490 [2024-11-27 05:50:26.185056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:38.490 qpair failed and we were unable to recover it.
00:28:38.491 [2024-11-27 05:50:26.191244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.191276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.191423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.191454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.191569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.191601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.191808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.191840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.192046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.192078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 
00:28:38.491 [2024-11-27 05:50:26.192274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.192306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.192499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.192530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.192683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.192718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.192920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.192952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.193141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.193172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 
00:28:38.491 [2024-11-27 05:50:26.193352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.193384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.193505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.193536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.193655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.193697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.193937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.194009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.194318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.194353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 
00:28:38.491 [2024-11-27 05:50:26.194474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.194505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.194757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.194792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.194971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.195002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.195136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.195168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 00:28:38.491 [2024-11-27 05:50:26.195289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.491 [2024-11-27 05:50:26.195319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.491 qpair failed and we were unable to recover it. 
00:28:38.491 [2024-11-27 05:50:26.195583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.195616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.195753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.195786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.195921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.195952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.196128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.196159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.196280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.196311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 
00:28:38.492 [2024-11-27 05:50:26.196491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.196522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.196634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.196666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.196867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.196899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.197033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.197064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.197240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.197272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 
00:28:38.492 [2024-11-27 05:50:26.197534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.197565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.197780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.197813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.197988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.198020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.198196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.198229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.198335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.198366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 
00:28:38.492 [2024-11-27 05:50:26.198500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.198532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.198766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.198799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.198993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.199024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.199193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.199224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.199409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.199442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 
00:28:38.492 [2024-11-27 05:50:26.199617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.199655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.199798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.199830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.200012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.200042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.200282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.200313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.200490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.200520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 
00:28:38.492 [2024-11-27 05:50:26.200707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.200740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.200911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.200942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.201142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.201176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.201365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.201395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.201570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.201601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 
00:28:38.492 [2024-11-27 05:50:26.201723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.201755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.201931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.201962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.202172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.202203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.202328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.202359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.202631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.202663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 
00:28:38.492 [2024-11-27 05:50:26.202943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.202980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.203175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.203206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.492 [2024-11-27 05:50:26.203454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.492 [2024-11-27 05:50:26.203486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.492 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.203699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.203731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.204002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.204033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 
00:28:38.493 [2024-11-27 05:50:26.204146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.204178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.204371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.204402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.204520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.204551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.204786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.204820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.205063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.205094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 
00:28:38.493 [2024-11-27 05:50:26.205273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.205304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.205511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.205543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.205740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.205786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.205995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.206026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.206226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.206257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 
00:28:38.493 [2024-11-27 05:50:26.206359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.206389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.206577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.206608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.206811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.206844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.207087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.207118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 00:28:38.493 [2024-11-27 05:50:26.207309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.493 [2024-11-27 05:50:26.207340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.493 qpair failed and we were unable to recover it. 
00:28:38.493 [2024-11-27 05:50:26.207472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.493 [2024-11-27 05:50:26.207504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.493 qpair failed and we were unable to recover it.
00:28:38.493 [2024-11-27 05:50:26.207708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.493 [2024-11-27 05:50:26.207746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:38.493 qpair failed and we were unable to recover it.
00:28:38.493 [2024-11-27 05:50:26.207980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.493 [2024-11-27 05:50:26.208050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.493 qpair failed and we were unable to recover it.
[... identical connect() errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplets for tqpair=0x7ff210000b90, addr=10.0.0.2, port=4420 repeat (timestamps 05:50:26.208291 through 05:50:26.228259) ...]
00:28:38.495 [2024-11-27 05:50:26.228400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same connect() errno = 111 / qpair-failed triplets for tqpair=0x7ff210000b90 continue (timestamps 05:50:26.228500 through 05:50:26.233655) ...]
00:28:38.496 [2024-11-27 05:50:26.233855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.233887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.234012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.234044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.234228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.234260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.234396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.234428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.234690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.234723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 
00:28:38.496 [2024-11-27 05:50:26.234908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.234941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.235127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.235158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.235371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.235402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.235668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.235708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.235821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.235853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 
00:28:38.496 [2024-11-27 05:50:26.236071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.236103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.236305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.236337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.236477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.236509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.496 [2024-11-27 05:50:26.236747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.496 [2024-11-27 05:50:26.236779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.496 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.236898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.236930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 
00:28:38.497 [2024-11-27 05:50:26.237107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.237140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.237255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.237287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.237513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.237545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.237737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.237770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.237948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.237979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 
00:28:38.497 [2024-11-27 05:50:26.238107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.238139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.238332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.238364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.238547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.238578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.238761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.238794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.238991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.239025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 
00:28:38.497 [2024-11-27 05:50:26.239205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.239236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.239476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.239509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.239694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.239728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.239904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.239937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.240160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.240194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 
00:28:38.497 [2024-11-27 05:50:26.240376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.240409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.240602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.240633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.240907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.240940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.241210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.241241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.241418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.241448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 
00:28:38.497 [2024-11-27 05:50:26.241689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.241722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.241995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.242025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.242139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.242177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.242350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.242381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.242621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.242652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 
00:28:38.497 [2024-11-27 05:50:26.242872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.242904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.243090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.243122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.243367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.243398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.243601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.243633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.243835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.243868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 
00:28:38.497 [2024-11-27 05:50:26.243979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.244010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.244137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.244168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.244288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.244320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.244488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.244520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.244713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.244754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 
00:28:38.497 [2024-11-27 05:50:26.245021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.245053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.497 [2024-11-27 05:50:26.245208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.497 [2024-11-27 05:50:26.245242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.497 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.245352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.245383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.245578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.245609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.245734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.245768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 
00:28:38.498 [2024-11-27 05:50:26.245945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.245977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.246262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.246293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.246418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.246448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.246630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.246661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.246784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.246817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 
00:28:38.498 [2024-11-27 05:50:26.247097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.247128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.247329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.247361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.247476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.247516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.247705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.247737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.247933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.247979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 
00:28:38.498 [2024-11-27 05:50:26.248161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.248193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.248322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.248355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.248566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.248600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.248780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.248815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.248991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.249024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 
00:28:38.498 [2024-11-27 05:50:26.249236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.249268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.249460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.249493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.249717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.249751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.249943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.249976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.250151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.250191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 
00:28:38.498 [2024-11-27 05:50:26.250386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.250418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.250607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.250639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.250854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.250895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.251158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.251190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.251373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.251405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 
00:28:38.498 [2024-11-27 05:50:26.251652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.251693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.251890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.251923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.252102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.252135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.252318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.252350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.252475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.252508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 
00:28:38.498 [2024-11-27 05:50:26.252707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.252744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.252884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.252917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.253097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.253128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.253302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.253334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 00:28:38.498 [2024-11-27 05:50:26.253590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.498 [2024-11-27 05:50:26.253622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.498 qpair failed and we were unable to recover it. 
00:28:38.499 [2024-11-27 05:50:26.253834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.253867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.254062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.254095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.254213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.254244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.254458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.254489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.254681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.254714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 
00:28:38.499 [2024-11-27 05:50:26.254944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.255012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.255253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.255297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.255519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.255567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.255801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.255836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.255977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.256010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 
00:28:38.499 [2024-11-27 05:50:26.256131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.256163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.256363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.256394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.256581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.256613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.256850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.256887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.257020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.257054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 
00:28:38.499 [2024-11-27 05:50:26.257250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.257282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.257570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.257601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.257732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.257773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.257955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.257986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.258187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.258219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 
00:28:38.499 [2024-11-27 05:50:26.258406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.258439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.258735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.258767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.258990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.259021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.259158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.259190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.259310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.259341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 
00:28:38.499 [2024-11-27 05:50:26.259551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.259582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.259700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.259738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.259925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.259962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.260097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.260129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.260317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.260349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 
00:28:38.499 [2024-11-27 05:50:26.260542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.260573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.260788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.260822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.260955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.260985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.261225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.261256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.261369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.261400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 
00:28:38.499 [2024-11-27 05:50:26.261531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.261561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.261822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.261854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.499 [2024-11-27 05:50:26.262117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.499 [2024-11-27 05:50:26.262149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.499 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.262276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.262307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.262426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.262456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 
00:28:38.500 [2024-11-27 05:50:26.262627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.262657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.262791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.262822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.263066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.263097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.263212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.263243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.263431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.263463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 
00:28:38.500 [2024-11-27 05:50:26.263677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.263710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.263997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.264028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.264142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.264173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.264310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.264341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.264513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.264545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 
00:28:38.500 [2024-11-27 05:50:26.264721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.264754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.264883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.264915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.265041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.265072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.265197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.265228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.265462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.265506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 
00:28:38.500 [2024-11-27 05:50:26.265643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.265685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.265954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.265986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.266116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.266148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.266339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.266378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.266563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.266596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 
00:28:38.500 [2024-11-27 05:50:26.266855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.266889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.267060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.267091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.267371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.267405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.267589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.267621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.267761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.267794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 
00:28:38.500 [2024-11-27 05:50:26.268014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.268046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.268181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.268213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.268346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.268385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.268591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.268626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.268826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.268859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 
00:28:38.500 [2024-11-27 05:50:26.268965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.268996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.500 qpair failed and we were unable to recover it. 00:28:38.500 [2024-11-27 05:50:26.269200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.500 [2024-11-27 05:50:26.269232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.269434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.269439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.501 [2024-11-27 05:50:26.269466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.501 [2024-11-27 05:50:26.269466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 [2024-11-27 05:50:26.269475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.501 [2024-11-27 05:50:26.269485] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.501 [2024-11-27 05:50:26.269490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.501 qpair failed and we were unable to recover it. 
00:28:38.501 [2024-11-27 05:50:26.269668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.269714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.269823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.269853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.270064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.270097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.270381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.270412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.270533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.270563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 
00:28:38.501 [2024-11-27 05:50:26.270751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.270782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.271049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.271080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.271200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.271232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.271148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:38.501 [2024-11-27 05:50:26.271256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:38.501 [2024-11-27 05:50:26.271362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:38.501 [2024-11-27 05:50:26.271363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:38.501 [2024-11-27 05:50:26.271509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.271540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 
00:28:38.501 [2024-11-27 05:50:26.271730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.271763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.271941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.271974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.272148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.272180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.272350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.272381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.272563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.272594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 
00:28:38.501 [2024-11-27 05:50:26.272775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.272807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.273077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.273111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.273378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.273410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.273688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.273721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.273929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.273966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 
00:28:38.501 [2024-11-27 05:50:26.274146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.274177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.274415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.274448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.274638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.274677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.274933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.274966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.275143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.275175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 
00:28:38.501 [2024-11-27 05:50:26.275367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.275403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.275512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.275544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.275815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.275848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.276042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.276074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.276258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.276290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 
00:28:38.501 [2024-11-27 05:50:26.276465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.276496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.276623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.276655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.276851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.276890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.277083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.501 [2024-11-27 05:50:26.277114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.501 qpair failed and we were unable to recover it. 00:28:38.501 [2024-11-27 05:50:26.277285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.277328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 
00:28:38.502 [2024-11-27 05:50:26.277533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.277564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.277693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.277726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.277908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.277940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.278060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.278092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.278267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.278300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 
00:28:38.502 [2024-11-27 05:50:26.278490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.278523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.278749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.278783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.278900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.278933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.279117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.279149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.279335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.279368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 
00:28:38.502 [2024-11-27 05:50:26.279509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.279542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.279688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.279723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.279913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.279946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.280214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.280248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.280368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.280409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 
00:28:38.502 [2024-11-27 05:50:26.280610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.280644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.280868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.280901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.281033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.281066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.281257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.281290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.281474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.281506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 
00:28:38.502 [2024-11-27 05:50:26.281705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.281740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.281916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.281949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.282086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.282120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.282384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.282417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.282608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.282640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 
00:28:38.502 [2024-11-27 05:50:26.282962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.283008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.283221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.283259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.283440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.283482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.283679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.283714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.283898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.283930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 
00:28:38.502 [2024-11-27 05:50:26.284119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.284153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.284284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.284316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.284509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.284542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.284724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.284758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.284948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.284980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 
00:28:38.502 [2024-11-27 05:50:26.285089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.285119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.285301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.502 [2024-11-27 05:50:26.285333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.502 qpair failed and we were unable to recover it. 00:28:38.502 [2024-11-27 05:50:26.285479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.285519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.285768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.285812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.285931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.285964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 
00:28:38.503 [2024-11-27 05:50:26.286226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.286260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.286430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.286462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.286578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.286609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.286738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.286772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.286969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.287000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 
00:28:38.503 [2024-11-27 05:50:26.287240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.287273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.287464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.287497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.287687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.287720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.287906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.287937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.288110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.288143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 
00:28:38.503 [2024-11-27 05:50:26.288332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.288364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.288484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.288516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.288764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.288799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.289087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.289119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.289299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.289330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 
00:28:38.503 [2024-11-27 05:50:26.289602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.289635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.289918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.289951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.290153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.290185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.290429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.290462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.290724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.290759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 
00:28:38.503 [2024-11-27 05:50:26.291048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.291081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.291271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.291303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.291492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.291524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.291703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.291736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 00:28:38.503 [2024-11-27 05:50:26.291985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.503 [2024-11-27 05:50:26.292019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.503 qpair failed and we were unable to recover it. 
00:28:38.503 [2024-11-27 05:50:26.292310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:38.503 [2024-11-27 05:50:26.292343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 
00:28:38.503 qpair failed and we were unable to recover it. 
00:28:38.503-00:28:38.506 [... the connect()-failed / sock-connection-error / qpair-failed triplet above repeats, always with errno = 111 and addr=10.0.0.2, port=4420, for tqpair=0x7ff208000b90 (05:50:26.292613 through 26.300420), tqpair=0x7ff210000b90 (26.300738 through 26.304637), tqpair=0x7ff208000b90 again (26.304869 through 26.312064), tqpair=0x1c26be0 (26.312385), tqpair=0x7ff204000b90 (26.312682 through 26.316641), and tqpair=0x7ff210000b90 (26.316924 through 26.321591); every attempt fails and no qpair is recovered ...]
00:28:38.506 [2024-11-27 05:50:26.321826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.506 [2024-11-27 05:50:26.321858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.506 qpair failed and we were unable to recover it. 00:28:38.506 [2024-11-27 05:50:26.322029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.506 [2024-11-27 05:50:26.322059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.506 qpair failed and we were unable to recover it. 00:28:38.506 [2024-11-27 05:50:26.322244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.506 [2024-11-27 05:50:26.322275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.506 qpair failed and we were unable to recover it. 00:28:38.506 [2024-11-27 05:50:26.322486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.506 [2024-11-27 05:50:26.322518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.506 qpair failed and we were unable to recover it. 00:28:38.506 [2024-11-27 05:50:26.322779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.506 [2024-11-27 05:50:26.322817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 
00:28:38.507 [2024-11-27 05:50:26.322998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.323030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.323200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.323232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.323499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.323530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.323732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.323765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.323948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.323979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 
00:28:38.507 [2024-11-27 05:50:26.324163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.324193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.324408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.324439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.324685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.324718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.324983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.325015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.325236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.325267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 
00:28:38.507 [2024-11-27 05:50:26.325456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.325487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.325726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.325759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.325951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.325982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.326273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.326304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.326549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.326580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 
00:28:38.507 [2024-11-27 05:50:26.326762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.326795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.327033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.327065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.327271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.327301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.327437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.327469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.327707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.327739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 
00:28:38.507 [2024-11-27 05:50:26.327979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.328010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.328244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.328276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.328455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.328486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.328737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.328771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.328963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.328996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 
00:28:38.507 [2024-11-27 05:50:26.329259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.329290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.329579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.329615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.329852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.329885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.330128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.330158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.330343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.330374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 
00:28:38.507 [2024-11-27 05:50:26.330660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.330702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.330942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.330972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.331163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.331194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.331380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.331410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.331611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.331642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 
00:28:38.507 [2024-11-27 05:50:26.331841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.331880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.332065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.332096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.332234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.507 [2024-11-27 05:50:26.332265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.507 qpair failed and we were unable to recover it. 00:28:38.507 [2024-11-27 05:50:26.332503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.332533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.332821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.332857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 
00:28:38.508 [2024-11-27 05:50:26.333072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.333104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.333364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.333395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.333576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.333607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.333845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.333876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.334138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.334169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 
00:28:38.508 [2024-11-27 05:50:26.334381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.334412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.334647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.334689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.334858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.334888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.335001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.335032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.335168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.335199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 
00:28:38.508 [2024-11-27 05:50:26.335373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.335404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.335582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.335612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.335754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.335787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.336053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.336089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.336291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.336323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 
00:28:38.508 [2024-11-27 05:50:26.336585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.336616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.336910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.336943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.337139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.337170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.337406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.337438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.337624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.337662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 
00:28:38.508 [2024-11-27 05:50:26.337867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.337898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.338164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.338198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.338462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.338492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.338807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.338840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.339035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.339067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 
00:28:38.508 [2024-11-27 05:50:26.339250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.339281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.339569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.339600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.339748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.339781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.340040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.340071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.340190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.340222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 
00:28:38.508 [2024-11-27 05:50:26.340439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.340471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.340663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.340703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.340821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.340851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.341025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.341056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.341319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.341350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 
00:28:38.508 [2024-11-27 05:50:26.341639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.508 [2024-11-27 05:50:26.341682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.508 qpair failed and we were unable to recover it. 00:28:38.508 [2024-11-27 05:50:26.341855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.509 [2024-11-27 05:50:26.341886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.509 qpair failed and we were unable to recover it. 00:28:38.509 [2024-11-27 05:50:26.342056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.509 [2024-11-27 05:50:26.342087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.509 qpair failed and we were unable to recover it. 00:28:38.509 [2024-11-27 05:50:26.342275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.509 [2024-11-27 05:50:26.342307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.509 qpair failed and we were unable to recover it. 00:28:38.509 [2024-11-27 05:50:26.342596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.509 [2024-11-27 05:50:26.342627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420 00:28:38.509 qpair failed and we were unable to recover it. 
00:28:38.509 [2024-11-27 05:50:26.342882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.509 [2024-11-27 05:50:26.342919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.509 qpair failed and we were unable to recover it. 
00:28:38.511 [2024-11-27 05:50:26.369155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.369187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.369443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.369473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.369689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.369721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.369984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.370017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.370187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.370217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 
00:28:38.512 [2024-11-27 05:50:26.370397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.370428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.370705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.370738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.370942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.370974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.371146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.371178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.371465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.371495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 
00:28:38.512 [2024-11-27 05:50:26.371754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.371786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.372063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.372094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.372377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.372408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.372685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.372717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.373004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.373035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 
00:28:38.512 [2024-11-27 05:50:26.373225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.373262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.373565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.373595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.373863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.373894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.374031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.374062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.374269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.374301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 
00:28:38.512 [2024-11-27 05:50:26.374482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.374513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.374720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.374753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.375015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.375046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.375291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.375321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.512 [2024-11-27 05:50:26.375527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.375560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 
00:28:38.512 [2024-11-27 05:50:26.375825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.375857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:38.512 [2024-11-27 05:50:26.376033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.376064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:38.512 [2024-11-27 05:50:26.376326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.376363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:38.512 [2024-11-27 05:50:26.376574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.376606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it.
00:28:38.512 [2024-11-27 05:50:26.376786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.376817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.512 [2024-11-27 05:50:26.377007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.377039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.377226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.377256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.377492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.377523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.377713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.377745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 
00:28:38.512 [2024-11-27 05:50:26.377983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.378013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.378267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.378297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.512 qpair failed and we were unable to recover it. 00:28:38.512 [2024-11-27 05:50:26.378560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.512 [2024-11-27 05:50:26.378592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.378832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.378865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.378982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.379012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 
00:28:38.513 [2024-11-27 05:50:26.379260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.379292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.379590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.379621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.379764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.379797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.379988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.380019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.380197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.380228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 
00:28:38.513 [2024-11-27 05:50:26.380512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.380544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.380809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.380842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.381050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.381081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.381366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.381396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.381589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.381620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 
00:28:38.513 [2024-11-27 05:50:26.381821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.381853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.382049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.382081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.382325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.382357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.382529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.382559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.382759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.382804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 
00:28:38.513 [2024-11-27 05:50:26.383007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.383039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.383222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.383254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.383441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.383473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.383714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.383745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.384038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.384069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 
00:28:38.513 [2024-11-27 05:50:26.384253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.384285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.384512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.384543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.384789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.384821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.385007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.385038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.385244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.385276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 
00:28:38.513 [2024-11-27 05:50:26.385465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.385495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.385761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.385794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.386034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.386066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.386208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.386240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.386379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.386414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 
00:28:38.513 [2024-11-27 05:50:26.386553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.386584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.386721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.386757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.386950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.386983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.387195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.387227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 00:28:38.513 [2024-11-27 05:50:26.387359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.513 [2024-11-27 05:50:26.387391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.513 qpair failed and we were unable to recover it. 
00:28:38.513 [2024-11-27 05:50:26.387635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.387667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.387920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.387953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.388081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.388114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.388249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.388279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.388538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.388572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 
00:28:38.514 [2024-11-27 05:50:26.388790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.388825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.389021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.389054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.389245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.389278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.389542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.389575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.389819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.389851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 
00:28:38.514 [2024-11-27 05:50:26.389989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.390020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.390210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.390242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.390422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.390453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.390717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.390750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.390942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.390974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 
00:28:38.514 [2024-11-27 05:50:26.391187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.391219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.391419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.391451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.391706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.391739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.391860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.391890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.392086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.392122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 
00:28:38.514 [2024-11-27 05:50:26.392383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.392415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.392660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.392700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.392907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.392939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.393197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.393228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.393528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.393561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 
00:28:38.514 [2024-11-27 05:50:26.393837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.393870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.394002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.394033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.394216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.394247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.394538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.394570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.394815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.394848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 
00:28:38.514 [2024-11-27 05:50:26.395034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.395067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.395329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.395361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.395543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.395574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.395711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.514 [2024-11-27 05:50:26.395742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.514 qpair failed and we were unable to recover it. 00:28:38.514 [2024-11-27 05:50:26.395916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.395948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 
00:28:38.515 [2024-11-27 05:50:26.396064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.396094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.396270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.396303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.396520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.396552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.396724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.396756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.397011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.397043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 
00:28:38.515 [2024-11-27 05:50:26.397173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.397205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.397319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.397351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.397536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.397568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.397790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.397823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.398034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.398065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 
00:28:38.515 [2024-11-27 05:50:26.398238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.398269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.398451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.398484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.398697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.398729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.398923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.398955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.399143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.399175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 
00:28:38.515 [2024-11-27 05:50:26.399449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.399480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.399766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.399798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.399988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.400019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.400161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.400191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.400382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.400413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 
00:28:38.515 [2024-11-27 05:50:26.400637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.400668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.400867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.400899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.401093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.401125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.401477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.401508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.401722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.401762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 
00:28:38.515 [2024-11-27 05:50:26.401903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.401935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.402173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.402205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.402456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.402489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.402728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.402760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.402871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.402901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 
00:28:38.515 [2024-11-27 05:50:26.403048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.403078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.403225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.403255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.403495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.403527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.403769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.403803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.403988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.404019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 
00:28:38.515 [2024-11-27 05:50:26.404211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.404242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.404488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.515 [2024-11-27 05:50:26.404520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.515 qpair failed and we were unable to recover it. 00:28:38.515 [2024-11-27 05:50:26.404810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.404842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.404994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.405026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.405314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.405346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 
00:28:38.516 [2024-11-27 05:50:26.405635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.405667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.405875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.405906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.406042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.406072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.406208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.406237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.406384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.406416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 
00:28:38.516 [2024-11-27 05:50:26.406691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.406725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.406850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.406880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.407075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.407107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.407233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.407263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.407533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.407564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 
00:28:38.516 [2024-11-27 05:50:26.407802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.407835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.408128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.408161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.408352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.408383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.408680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.408713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.408902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.408935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 
00:28:38.516 [2024-11-27 05:50:26.409119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.409152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.409395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.409426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.409711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.409744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.410016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.410049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.410333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.410366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 
00:28:38.516 [2024-11-27 05:50:26.410629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.410659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.410868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.410900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.411085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.411116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.411323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.411354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.411523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.411561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 
00:28:38.516 [2024-11-27 05:50:26.411736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.411770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.411889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.411920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.412123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.412155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 [2024-11-27 05:50:26.412371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.412403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 00:28:38.516 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.516 [2024-11-27 05:50:26.412667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.516 [2024-11-27 05:50:26.412708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff208000b90 with addr=10.0.0.2, port=4420 00:28:38.516 qpair failed and we were unable to recover it. 
00:28:38.516 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:38.516 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.516 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:38.517 [... interleaved with the trace above, the "connect() failed, errno = 111" / unrecoverable-qpair errors continue from 05:50:26.412949 through 05:50:26.413732, now against tqpair=0x7ff208000b90, 0x7ff204000b90 and 0x7ff210000b90 ...]
00:28:38.517 [2024-11-27 05:50:26.413988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.414021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.414223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.414255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.414528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.414560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.414794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.414827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.415063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.415095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 
00:28:38.517 [2024-11-27 05:50:26.415223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.415255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.415383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.415413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.415531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.415562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.415744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.415777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.416060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.416092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 
00:28:38.517 [2024-11-27 05:50:26.416215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.416246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.416434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.416465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.416651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.416691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.416885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.416916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.417177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.417208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 
00:28:38.517 [2024-11-27 05:50:26.417406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.417437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.417697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.417727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.417921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.417952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.418099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.418130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.418431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.418463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 
00:28:38.517 [2024-11-27 05:50:26.418595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.418625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.418804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.418836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.418944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.418975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.419144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.419174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.419366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.419396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 
00:28:38.517 [2024-11-27 05:50:26.419578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.419609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.419895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.419928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.420165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.420197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.420389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.420426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.420697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.420731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 
00:28:38.517 [2024-11-27 05:50:26.420915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.420946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.421126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.421158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.421394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.421425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.421714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.421746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.421917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.421948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 
00:28:38.517 [2024-11-27 05:50:26.422061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.422093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.422293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.422324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.517 qpair failed and we were unable to recover it. 00:28:38.517 [2024-11-27 05:50:26.422590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.517 [2024-11-27 05:50:26.422622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.422829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.422862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.423046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.423076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 
00:28:38.518 [2024-11-27 05:50:26.423255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.423286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.423487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.423516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.423694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.423726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.423937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.423968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.424136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.424166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 
00:28:38.518 [2024-11-27 05:50:26.424420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.424451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.424633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.424664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.424848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.424878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.425111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.425141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.425268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.425298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 
00:28:38.518 [2024-11-27 05:50:26.425555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.425587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.425787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.425820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.425991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.426021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.426191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.426222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.426477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.426508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 
00:28:38.518 [2024-11-27 05:50:26.426708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.426740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.426928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.426959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.427218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.427250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.427535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.427565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.427767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.427799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 
00:28:38.518 [2024-11-27 05:50:26.428064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.428094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.428302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.428332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.428519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.428551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.428683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.428715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.428910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.428940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 
00:28:38.518 [2024-11-27 05:50:26.429141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.429172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.429352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.429382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.429643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.429697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.429975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.430012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.430253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.430285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 
00:28:38.518 [2024-11-27 05:50:26.430463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.430494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.430781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.430814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.431047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.431079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.431263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.431295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 00:28:38.518 [2024-11-27 05:50:26.431466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.518 [2024-11-27 05:50:26.431497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.518 qpair failed and we were unable to recover it. 
00:28:38.518 [2024-11-27 05:50:26.431614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.431644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.431918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.431951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.432218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.432250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.432512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.432544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.432787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.432819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 
00:28:38.519 [2024-11-27 05:50:26.433009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.433039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.433276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.433307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.433562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.433593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.433721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.433752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.433990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.434021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 
00:28:38.519 [2024-11-27 05:50:26.434296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.434327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.434510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.434541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.434714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.434746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.435002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.435033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 00:28:38.519 [2024-11-27 05:50:26.435226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.519 [2024-11-27 05:50:26.435258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420 00:28:38.519 qpair failed and we were unable to recover it. 
00:28:38.519 [2024-11-27 05:50:26.435466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.435499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.435737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.435769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.436029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.436060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.436325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.436357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.436639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.436697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.436936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.436968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.437161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.437193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.437476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.437507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.437783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.437816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.438084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.438116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.438326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.438357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.438598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.438630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.438905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.438937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.439218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.439250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.439529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.439560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.439829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.439863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.440048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.440080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.440252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.440284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.519 [2024-11-27 05:50:26.440543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.519 [2024-11-27 05:50:26.440582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.519 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.440850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.440882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.441019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.441052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.441236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.441268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.441528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.441559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.441775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.441808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.442100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.442132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.442345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.442378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.442568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.442601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.442864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.442899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.443164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.443197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.443480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.443511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.443790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.443823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.443995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.444026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.444297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.444328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.444566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.444600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.444867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.444900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.445188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.445219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.445433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.445464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 Malloc0
00:28:38.520 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.520 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:38.520 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.520 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:38.520 [2024-11-27 05:50:26.447622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.447690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.447945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.447980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.448231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.448265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.448505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.448537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.448797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.448831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.449034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.449065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.449313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.449353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.449532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.449564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.449763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.449796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.450061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.450093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.450297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.450329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.450461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.450492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.450752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.450786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.450973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.451005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.451129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.451160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.451354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.451386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.451600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.451633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.451829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.520 [2024-11-27 05:50:26.451863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.520 qpair failed and we were unable to recover it.
00:28:38.520 [2024-11-27 05:50:26.452102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.452133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.452342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.452373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.452620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.452651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.452958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.452991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.453276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.453306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.453368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:38.521 [2024-11-27 05:50:26.453488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.453518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.453699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.453732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.453947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.453977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.454249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.454280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.454479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.454510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.454688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.454722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.455040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.455071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.455335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.455365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.455542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.455574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.455764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.455796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.455983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.456014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.456202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.456233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.456447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.456479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.456722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.456756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.456965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.457000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.457257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.457290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.457484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.457517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.457824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.457857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.457981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.458010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.458245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.458275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.458483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.458512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.458644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.458682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.458784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.458815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.459101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.459132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.459338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.459369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.459553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.459585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.459846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.459877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.460015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.460044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.460298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.460328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.460514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.460544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.460779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.460810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.460934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.460967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.521 [2024-11-27 05:50:26.461148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.521 [2024-11-27 05:50:26.461180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.521 qpair failed and we were unable to recover it.
00:28:38.522 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.522 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:38.522 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.522 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:38.522 [2024-11-27 05:50:26.463203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.463254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.463501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.463545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.463760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.463793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.463925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.463955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.464086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.464116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.464245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.464274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.464510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.464542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.464665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.464707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.464967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.464999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.465189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.465223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.465484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.465516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.465709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.465741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.465980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.466013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.466256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.466287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.466542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.466573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.466773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.466806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.466997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.467029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.467167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.467198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.467312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.467342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.467575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.467607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.467817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.467848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.468038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.468069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.468218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.468249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.468437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.468467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.468714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.468746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.468931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.468962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.469199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.469231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.522 [2024-11-27 05:50:26.469369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.522 [2024-11-27 05:50:26.469401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff210000b90 with addr=10.0.0.2, port=4420
00:28:38.522 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.469650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.469735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff204000b90 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.469926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.469988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.784 [2024-11-27 05:50:26.470187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.470221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:38.784 [2024-11-27 05:50:26.470415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.470448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.470630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.470661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.784 [2024-11-27 05:50:26.470865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.470898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:38.784 [2024-11-27 05:50:26.471106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.471138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.471324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.471355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.471620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.471652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.471842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.471874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.472109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.472141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.472259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.472288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.472466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.472497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.472801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.472835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.473017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.473047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.473308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.473338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.473519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.473552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.473691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.473723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.473916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.473948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.474144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.474176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.474469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.474500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.474790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.474822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.475015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.475046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.475238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.475269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.475453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.475485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.475665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.475712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.475907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.475939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.476180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.476210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.476553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.476584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.476845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.476878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.784 [2024-11-27 05:50:26.477066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.784 [2024-11-27 05:50:26.477097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.784 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.477220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.477251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.477439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.477470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.477734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.477767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.477960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.785 [2024-11-27 05:50:26.477992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.478249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.478281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:38.785 [2024-11-27 05:50:26.478455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.478487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.785 [2024-11-27 05:50:26.478653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.478700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.478883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.478914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:38.785 [2024-11-27 05:50:26.479085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.479117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.479407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.479438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.479695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.479727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.479917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.479949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.480214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.480246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.480491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.480522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.480756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.480788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.481030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.481060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.481323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.785 [2024-11-27 05:50:26.481355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26be0 with addr=10.0.0.2, port=4420
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.481598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:38.785 [2024-11-27 05:50:26.484017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.785 [2024-11-27 05:50:26.484122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.785 [2024-11-27 05:50:26.484165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.785 [2024-11-27 05:50:26.484188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.785 [2024-11-27 05:50:26.484217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:38.785 [2024-11-27 05:50:26.484268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.785 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:38.785 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.785 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:38.785 [2024-11-27 05:50:26.493969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.785 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.785 [2024-11-27 05:50:26.494071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.785 [2024-11-27 05:50:26.494110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.785 [2024-11-27 05:50:26.494132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.785 [2024-11-27 05:50:26.494153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:38.785 [2024-11-27 05:50:26.494197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 05:50:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1923204
00:28:38.785 [2024-11-27 05:50:26.503941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:38.785 [2024-11-27 05:50:26.504014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:38.785 [2024-11-27 05:50:26.504043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:38.785 [2024-11-27 05:50:26.504057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:38.785 [2024-11-27 05:50:26.504070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:38.785 [2024-11-27 05:50:26.504100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:38.785 qpair failed and we were unable to recover it.
00:28:38.785 [2024-11-27 05:50:26.513968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.785 [2024-11-27 05:50:26.514034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.785 [2024-11-27 05:50:26.514053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.785 [2024-11-27 05:50:26.514063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.785 [2024-11-27 05:50:26.514071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.785 [2024-11-27 05:50:26.514092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.785 qpair failed and we were unable to recover it. 
00:28:38.785 [2024-11-27 05:50:26.523925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.785 [2024-11-27 05:50:26.523989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.785 [2024-11-27 05:50:26.524004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.785 [2024-11-27 05:50:26.524011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.785 [2024-11-27 05:50:26.524017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.785 [2024-11-27 05:50:26.524032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.785 qpair failed and we were unable to recover it. 
00:28:38.785 [2024-11-27 05:50:26.533960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.785 [2024-11-27 05:50:26.534015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.785 [2024-11-27 05:50:26.534031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.785 [2024-11-27 05:50:26.534038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.534044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.534057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.543972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.544023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.544037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.544043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.544049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.544064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.554023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.554118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.554132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.554139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.554144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.554159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.564056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.564111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.564125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.564136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.564142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.564157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.574079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.574133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.574147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.574153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.574159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.574173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.584099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.584154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.584168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.584175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.584181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.584196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.594118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.594176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.594190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.594196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.594202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.594216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.604144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.604198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.604212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.604219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.604226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.604245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.614162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.614217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.614230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.614237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.614243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.614257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.624194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.624248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.624264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.624272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.624277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.624292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.634247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.634305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.634319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.634326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.634331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.634346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.644260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.644312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.644325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.644331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.644337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.644351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.654263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.654353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.654367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.654373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.654379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.786 [2024-11-27 05:50:26.654393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.786 qpair failed and we were unable to recover it. 
00:28:38.786 [2024-11-27 05:50:26.664306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.786 [2024-11-27 05:50:26.664359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.786 [2024-11-27 05:50:26.664373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.786 [2024-11-27 05:50:26.664380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.786 [2024-11-27 05:50:26.664386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.664399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.674346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.674399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.674412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.674419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.674425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.674439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.684361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.684418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.684431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.684438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.684444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.684458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.694402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.694458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.694472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.694482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.694488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.694503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.704422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.704479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.704493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.704500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.704506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.704521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.714512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.714615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.714629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.714636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.714642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.714655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.724414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.724468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.724481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.724487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.724493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.724507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.734561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.734612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.734625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.734631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.734637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.734655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.744566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.744620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.744633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.744640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.744646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.744659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.754579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.754635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.754649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.754655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.754661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.754681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.764595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.764701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.764714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.764721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.764726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.764741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:38.787 [2024-11-27 05:50:26.774628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.787 [2024-11-27 05:50:26.774700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.787 [2024-11-27 05:50:26.774714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.787 [2024-11-27 05:50:26.774720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.787 [2024-11-27 05:50:26.774726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:38.787 [2024-11-27 05:50:26.774740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:38.787 qpair failed and we were unable to recover it. 
00:28:39.047 [2024-11-27 05:50:26.784654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.047 [2024-11-27 05:50:26.784717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.047 [2024-11-27 05:50:26.784731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.047 [2024-11-27 05:50:26.784737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.047 [2024-11-27 05:50:26.784743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.047 [2024-11-27 05:50:26.784757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.047 qpair failed and we were unable to recover it. 
00:28:39.047 [2024-11-27 05:50:26.794709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.047 [2024-11-27 05:50:26.794784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.047 [2024-11-27 05:50:26.794799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.047 [2024-11-27 05:50:26.794805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.047 [2024-11-27 05:50:26.794811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.047 [2024-11-27 05:50:26.794825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.047 qpair failed and we were unable to recover it. 
00:28:39.047 [2024-11-27 05:50:26.804729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.047 [2024-11-27 05:50:26.804786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.047 [2024-11-27 05:50:26.804799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.047 [2024-11-27 05:50:26.804805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.047 [2024-11-27 05:50:26.804812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.047 [2024-11-27 05:50:26.804826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.047 qpair failed and we were unable to recover it. 
00:28:39.049 [... the same CONNECT failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to connect tqpair=0x1c26be0; CQ transport error -6 (No such device or address) on qpair id 3; qpair failed and we were unable to recover it) repeats every ~10 ms from 05:50:26.814 through 05:50:27.145; 34 further identical attempts omitted ...]
00:28:39.311 [2024-11-27 05:50:27.155754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.311 [2024-11-27 05:50:27.155832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.311 [2024-11-27 05:50:27.155845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.311 [2024-11-27 05:50:27.155852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.311 [2024-11-27 05:50:27.155858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.311 [2024-11-27 05:50:27.155872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.311 qpair failed and we were unable to recover it. 
00:28:39.311 [2024-11-27 05:50:27.165701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.311 [2024-11-27 05:50:27.165754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.311 [2024-11-27 05:50:27.165767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.311 [2024-11-27 05:50:27.165774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.311 [2024-11-27 05:50:27.165780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.311 [2024-11-27 05:50:27.165794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.311 qpair failed and we were unable to recover it. 
00:28:39.311 [2024-11-27 05:50:27.175805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.311 [2024-11-27 05:50:27.175866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.311 [2024-11-27 05:50:27.175879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.311 [2024-11-27 05:50:27.175885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.311 [2024-11-27 05:50:27.175891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.311 [2024-11-27 05:50:27.175906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.311 qpair failed and we were unable to recover it. 
00:28:39.311 [2024-11-27 05:50:27.185790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.311 [2024-11-27 05:50:27.185888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.311 [2024-11-27 05:50:27.185901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.311 [2024-11-27 05:50:27.185907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.311 [2024-11-27 05:50:27.185913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.311 [2024-11-27 05:50:27.185927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.311 qpair failed and we were unable to recover it. 
00:28:39.311 [2024-11-27 05:50:27.195932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.311 [2024-11-27 05:50:27.195989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.311 [2024-11-27 05:50:27.196002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.311 [2024-11-27 05:50:27.196008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.311 [2024-11-27 05:50:27.196014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.196028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.205843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.205927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.205940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.205946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.205952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.205965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.215909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.215996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.216009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.216019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.216025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.216039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.225940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.226019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.226033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.226040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.226045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.226060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.235974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.236027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.236040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.236047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.236053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.236067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.245977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.246036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.246051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.246058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.246064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.246078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.255957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.256012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.256026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.256033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.256039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.256056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.265968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.266033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.266045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.266052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.266058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.266072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.276005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.276065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.276077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.276084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.276090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.276103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.286157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.286223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.286236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.286243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.286249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.286263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.296166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.296218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.296231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.296238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.296244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.296258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.312 [2024-11-27 05:50:27.306189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.312 [2024-11-27 05:50:27.306244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.312 [2024-11-27 05:50:27.306259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.312 [2024-11-27 05:50:27.306265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.312 [2024-11-27 05:50:27.306271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.312 [2024-11-27 05:50:27.306285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.312 qpair failed and we were unable to recover it. 
00:28:39.574 [2024-11-27 05:50:27.316233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.574 [2024-11-27 05:50:27.316339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.574 [2024-11-27 05:50:27.316354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.574 [2024-11-27 05:50:27.316360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.574 [2024-11-27 05:50:27.316366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.574 [2024-11-27 05:50:27.316381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.574 qpair failed and we were unable to recover it. 
00:28:39.574 [2024-11-27 05:50:27.326179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.574 [2024-11-27 05:50:27.326237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.574 [2024-11-27 05:50:27.326250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.574 [2024-11-27 05:50:27.326256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.574 [2024-11-27 05:50:27.326262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.574 [2024-11-27 05:50:27.326277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.574 qpair failed and we were unable to recover it. 
00:28:39.574 [2024-11-27 05:50:27.336246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.574 [2024-11-27 05:50:27.336297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.574 [2024-11-27 05:50:27.336310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.574 [2024-11-27 05:50:27.336316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.574 [2024-11-27 05:50:27.336322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.574 [2024-11-27 05:50:27.336336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.574 qpair failed and we were unable to recover it. 
00:28:39.574 [2024-11-27 05:50:27.346212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.574 [2024-11-27 05:50:27.346294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.574 [2024-11-27 05:50:27.346308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.574 [2024-11-27 05:50:27.346317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.574 [2024-11-27 05:50:27.346323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.574 [2024-11-27 05:50:27.346337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.574 qpair failed and we were unable to recover it. 
00:28:39.574 [2024-11-27 05:50:27.356227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.574 [2024-11-27 05:50:27.356282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.574 [2024-11-27 05:50:27.356296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.574 [2024-11-27 05:50:27.356302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.574 [2024-11-27 05:50:27.356308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.574 [2024-11-27 05:50:27.356323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.574 qpair failed and we were unable to recover it. 
00:28:39.574 [2024-11-27 05:50:27.366371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.574 [2024-11-27 05:50:27.366426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.574 [2024-11-27 05:50:27.366439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.574 [2024-11-27 05:50:27.366446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.574 [2024-11-27 05:50:27.366452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.574 [2024-11-27 05:50:27.366465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.574 qpair failed and we were unable to recover it. 
00:28:39.574 [2024-11-27 05:50:27.376343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.574 [2024-11-27 05:50:27.376436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.574 [2024-11-27 05:50:27.376448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.574 [2024-11-27 05:50:27.376454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.574 [2024-11-27 05:50:27.376460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.574 [2024-11-27 05:50:27.376474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.574 qpair failed and we were unable to recover it. 
00:28:39.574 [2024-11-27 05:50:27.386381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.574 [2024-11-27 05:50:27.386433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.574 [2024-11-27 05:50:27.386446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.574 [2024-11-27 05:50:27.386452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.574 [2024-11-27 05:50:27.386458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.574 [2024-11-27 05:50:27.386475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.574 qpair failed and we were unable to recover it. 
00:28:39.574 [2024-11-27 05:50:27.396425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.574 [2024-11-27 05:50:27.396481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.574 [2024-11-27 05:50:27.396494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.574 [2024-11-27 05:50:27.396500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.574 [2024-11-27 05:50:27.396506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.574 [2024-11-27 05:50:27.396519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.574 qpair failed and we were unable to recover it. 
00:28:39.575 [2024-11-27 05:50:27.406450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.575 [2024-11-27 05:50:27.406506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.575 [2024-11-27 05:50:27.406518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.575 [2024-11-27 05:50:27.406525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.575 [2024-11-27 05:50:27.406530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.575 [2024-11-27 05:50:27.406544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.575 qpair failed and we were unable to recover it. 
00:28:39.575 [2024-11-27 05:50:27.416477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.575 [2024-11-27 05:50:27.416545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.575 [2024-11-27 05:50:27.416558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.575 [2024-11-27 05:50:27.416565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.575 [2024-11-27 05:50:27.416570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.575 [2024-11-27 05:50:27.416584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.575 qpair failed and we were unable to recover it. 
00:28:39.575 [2024-11-27 05:50:27.426503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-11-27 05:50:27.426554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-11-27 05:50:27.426568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-11-27 05:50:27.426574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-11-27 05:50:27.426580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.575 [2024-11-27 05:50:27.426594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-11-27 05:50:27.436546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-11-27 05:50:27.436602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-11-27 05:50:27.436615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-11-27 05:50:27.436622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-11-27 05:50:27.436627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.575 [2024-11-27 05:50:27.436642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-11-27 05:50:27.446570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-11-27 05:50:27.446619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-11-27 05:50:27.446633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-11-27 05:50:27.446639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-11-27 05:50:27.446645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.575 [2024-11-27 05:50:27.446659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-11-27 05:50:27.456603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-11-27 05:50:27.456657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-11-27 05:50:27.456676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-11-27 05:50:27.456683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-11-27 05:50:27.456688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.575 [2024-11-27 05:50:27.456704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-11-27 05:50:27.466626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-11-27 05:50:27.466694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-11-27 05:50:27.466708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-11-27 05:50:27.466715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-11-27 05:50:27.466720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.575 [2024-11-27 05:50:27.466735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-11-27 05:50:27.476660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-11-27 05:50:27.476723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-11-27 05:50:27.476740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-11-27 05:50:27.476747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-11-27 05:50:27.476752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.575 [2024-11-27 05:50:27.476766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-11-27 05:50:27.486739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-11-27 05:50:27.486798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-11-27 05:50:27.486811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-11-27 05:50:27.486818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-11-27 05:50:27.486823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.575 [2024-11-27 05:50:27.486838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-11-27 05:50:27.496708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-11-27 05:50:27.496762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.575 [2024-11-27 05:50:27.496775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.575 [2024-11-27 05:50:27.496782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.575 [2024-11-27 05:50:27.496788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.575 [2024-11-27 05:50:27.496802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.575 qpair failed and we were unable to recover it.
00:28:39.575 [2024-11-27 05:50:27.506725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.575 [2024-11-27 05:50:27.506775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.576 [2024-11-27 05:50:27.506788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.576 [2024-11-27 05:50:27.506795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.576 [2024-11-27 05:50:27.506801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.576 [2024-11-27 05:50:27.506815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.576 qpair failed and we were unable to recover it.
00:28:39.576 [2024-11-27 05:50:27.516761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.576 [2024-11-27 05:50:27.516832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.576 [2024-11-27 05:50:27.516846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.576 [2024-11-27 05:50:27.516852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.576 [2024-11-27 05:50:27.516858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.576 [2024-11-27 05:50:27.516875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.576 qpair failed and we were unable to recover it.
00:28:39.576 [2024-11-27 05:50:27.526793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.576 [2024-11-27 05:50:27.526847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.576 [2024-11-27 05:50:27.526860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.576 [2024-11-27 05:50:27.526867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.576 [2024-11-27 05:50:27.526873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.576 [2024-11-27 05:50:27.526887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.576 qpair failed and we were unable to recover it.
00:28:39.576 [2024-11-27 05:50:27.536818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.576 [2024-11-27 05:50:27.536869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.576 [2024-11-27 05:50:27.536882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.576 [2024-11-27 05:50:27.536889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.576 [2024-11-27 05:50:27.536894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.576 [2024-11-27 05:50:27.536909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.576 qpair failed and we were unable to recover it.
00:28:39.576 [2024-11-27 05:50:27.546848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.576 [2024-11-27 05:50:27.546909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.576 [2024-11-27 05:50:27.546923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.576 [2024-11-27 05:50:27.546929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.576 [2024-11-27 05:50:27.546935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.576 [2024-11-27 05:50:27.546949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.576 qpair failed and we were unable to recover it.
00:28:39.576 [2024-11-27 05:50:27.556902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.576 [2024-11-27 05:50:27.556960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.576 [2024-11-27 05:50:27.556973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.576 [2024-11-27 05:50:27.556980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.576 [2024-11-27 05:50:27.556985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.576 [2024-11-27 05:50:27.556999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.576 qpair failed and we were unable to recover it.
00:28:39.576 [2024-11-27 05:50:27.566912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.576 [2024-11-27 05:50:27.567007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.576 [2024-11-27 05:50:27.567021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.576 [2024-11-27 05:50:27.567027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.576 [2024-11-27 05:50:27.567032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.576 [2024-11-27 05:50:27.567047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.576 qpair failed and we were unable to recover it.
00:28:39.837 [2024-11-27 05:50:27.576858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.837 [2024-11-27 05:50:27.576913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.837 [2024-11-27 05:50:27.576926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.837 [2024-11-27 05:50:27.576932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.837 [2024-11-27 05:50:27.576938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.837 [2024-11-27 05:50:27.576951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.837 qpair failed and we were unable to recover it.
00:28:39.837 [2024-11-27 05:50:27.587001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.837 [2024-11-27 05:50:27.587106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.837 [2024-11-27 05:50:27.587121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.837 [2024-11-27 05:50:27.587127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.837 [2024-11-27 05:50:27.587133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.837 [2024-11-27 05:50:27.587148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.837 qpair failed and we were unable to recover it.
00:28:39.837 [2024-11-27 05:50:27.597042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.837 [2024-11-27 05:50:27.597110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.837 [2024-11-27 05:50:27.597124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.837 [2024-11-27 05:50:27.597130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.837 [2024-11-27 05:50:27.597136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.837 [2024-11-27 05:50:27.597151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.837 qpair failed and we were unable to recover it.
00:28:39.837 [2024-11-27 05:50:27.607020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.837 [2024-11-27 05:50:27.607075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.837 [2024-11-27 05:50:27.607092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.837 [2024-11-27 05:50:27.607099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.837 [2024-11-27 05:50:27.607104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.837 [2024-11-27 05:50:27.607119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.837 qpair failed and we were unable to recover it.
00:28:39.837 [2024-11-27 05:50:27.617044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.617100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.617114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.617121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.617127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.617141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.627072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.627124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.627138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.627144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.627150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.627165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.637120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.637176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.637190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.637196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.637202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.637217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.647171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.647248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.647262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.647268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.647273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.647295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.657160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.657210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.657223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.657229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.657235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.657249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.667188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.667294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.667310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.667316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.667322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.667337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.677243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.677300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.677313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.677320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.677326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.677339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.687255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.687341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.687354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.687360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.687366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.687380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.697269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.697323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.697337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.697344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.697350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.697363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.707315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.707372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.707386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.707393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.707399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.707413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.717341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.717393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.717406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.717413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.717419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.717433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.727397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.727460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.727473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.727480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.727486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.727500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.737395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.737450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.838 [2024-11-27 05:50:27.737467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.838 [2024-11-27 05:50:27.737474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.838 [2024-11-27 05:50:27.737479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.838 [2024-11-27 05:50:27.737494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.838 qpair failed and we were unable to recover it.
00:28:39.838 [2024-11-27 05:50:27.747424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.838 [2024-11-27 05:50:27.747475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.839 [2024-11-27 05:50:27.747489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.839 [2024-11-27 05:50:27.747495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.839 [2024-11-27 05:50:27.747501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.839 [2024-11-27 05:50:27.747515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.839 qpair failed and we were unable to recover it.
00:28:39.839 [2024-11-27 05:50:27.757464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.839 [2024-11-27 05:50:27.757516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.839 [2024-11-27 05:50:27.757529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.839 [2024-11-27 05:50:27.757536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.839 [2024-11-27 05:50:27.757542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.839 [2024-11-27 05:50:27.757556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.839 qpair failed and we were unable to recover it.
00:28:39.839 [2024-11-27 05:50:27.767484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:39.839 [2024-11-27 05:50:27.767546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:39.839 [2024-11-27 05:50:27.767560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:39.839 [2024-11-27 05:50:27.767566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:39.839 [2024-11-27 05:50:27.767573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:39.839 [2024-11-27 05:50:27.767586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:39.839 qpair failed and we were unable to recover it.
00:28:39.839 [2024-11-27 05:50:27.777516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.839 [2024-11-27 05:50:27.777567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.839 [2024-11-27 05:50:27.777581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.839 [2024-11-27 05:50:27.777588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.839 [2024-11-27 05:50:27.777594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.839 [2024-11-27 05:50:27.777612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.839 qpair failed and we were unable to recover it. 
00:28:39.839 [2024-11-27 05:50:27.787535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.839 [2024-11-27 05:50:27.787589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.839 [2024-11-27 05:50:27.787601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.839 [2024-11-27 05:50:27.787608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.839 [2024-11-27 05:50:27.787614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.839 [2024-11-27 05:50:27.787628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.839 qpair failed and we were unable to recover it. 
00:28:39.839 [2024-11-27 05:50:27.797584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.839 [2024-11-27 05:50:27.797657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.839 [2024-11-27 05:50:27.797675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.839 [2024-11-27 05:50:27.797682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.839 [2024-11-27 05:50:27.797689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.839 [2024-11-27 05:50:27.797704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.839 qpair failed and we were unable to recover it. 
00:28:39.839 [2024-11-27 05:50:27.807603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.839 [2024-11-27 05:50:27.807655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.839 [2024-11-27 05:50:27.807672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.839 [2024-11-27 05:50:27.807679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.839 [2024-11-27 05:50:27.807685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.839 [2024-11-27 05:50:27.807699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.839 qpair failed and we were unable to recover it. 
00:28:39.839 [2024-11-27 05:50:27.817627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.839 [2024-11-27 05:50:27.817687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.839 [2024-11-27 05:50:27.817700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.839 [2024-11-27 05:50:27.817707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.839 [2024-11-27 05:50:27.817712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.839 [2024-11-27 05:50:27.817727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.839 qpair failed and we were unable to recover it. 
00:28:39.839 [2024-11-27 05:50:27.827658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.839 [2024-11-27 05:50:27.827715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.839 [2024-11-27 05:50:27.827729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.839 [2024-11-27 05:50:27.827736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.839 [2024-11-27 05:50:27.827742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.839 [2024-11-27 05:50:27.827755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.839 qpair failed and we were unable to recover it. 
00:28:39.839 [2024-11-27 05:50:27.837710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:39.839 [2024-11-27 05:50:27.837812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:39.839 [2024-11-27 05:50:27.837826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:39.839 [2024-11-27 05:50:27.837833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:39.839 [2024-11-27 05:50:27.837838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:39.839 [2024-11-27 05:50:27.837852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:39.839 qpair failed and we were unable to recover it. 
00:28:40.100 [2024-11-27 05:50:27.847739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.100 [2024-11-27 05:50:27.847802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.100 [2024-11-27 05:50:27.847816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.100 [2024-11-27 05:50:27.847823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.100 [2024-11-27 05:50:27.847828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.100 [2024-11-27 05:50:27.847843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.100 qpair failed and we were unable to recover it. 
00:28:40.100 [2024-11-27 05:50:27.857760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.100 [2024-11-27 05:50:27.857814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.100 [2024-11-27 05:50:27.857828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.100 [2024-11-27 05:50:27.857835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.100 [2024-11-27 05:50:27.857841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.100 [2024-11-27 05:50:27.857855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.100 qpair failed and we were unable to recover it. 
00:28:40.100 [2024-11-27 05:50:27.867782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.100 [2024-11-27 05:50:27.867835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.100 [2024-11-27 05:50:27.867852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.100 [2024-11-27 05:50:27.867859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.100 [2024-11-27 05:50:27.867865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.100 [2024-11-27 05:50:27.867879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.100 qpair failed and we were unable to recover it. 
00:28:40.100 [2024-11-27 05:50:27.877809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.100 [2024-11-27 05:50:27.877863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.100 [2024-11-27 05:50:27.877877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.100 [2024-11-27 05:50:27.877883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.100 [2024-11-27 05:50:27.877889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.100 [2024-11-27 05:50:27.877903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.100 qpair failed and we were unable to recover it. 
00:28:40.100 [2024-11-27 05:50:27.887842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.100 [2024-11-27 05:50:27.887898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.100 [2024-11-27 05:50:27.887911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.887918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.887924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.887938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.897867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.897917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.897930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.897936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.897942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.897956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.907885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.907960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.907973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.907979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.907988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.908001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.917930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.917985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.917998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.918004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.918010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.918024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.927948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.928016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.928029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.928035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.928041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.928055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.937986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.938039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.938053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.938059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.938065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.938079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.947978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.948038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.948051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.948058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.948064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.948078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.958017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.958072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.958088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.958095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.958101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.958116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.968090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.968145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.968158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.968165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.968171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.968184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.978091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.978142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.978156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.978163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.978168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.978182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.988093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.988152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.988166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.988172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.988178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.988192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:27.998163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:27.998233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:27.998251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:27.998257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:27.998263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:27.998277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.101 qpair failed and we were unable to recover it. 
00:28:40.101 [2024-11-27 05:50:28.008179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.101 [2024-11-27 05:50:28.008238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.101 [2024-11-27 05:50:28.008252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.101 [2024-11-27 05:50:28.008258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.101 [2024-11-27 05:50:28.008264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.101 [2024-11-27 05:50:28.008278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.102 qpair failed and we were unable to recover it. 
00:28:40.102 [2024-11-27 05:50:28.018215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.102 [2024-11-27 05:50:28.018268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.102 [2024-11-27 05:50:28.018282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.102 [2024-11-27 05:50:28.018288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.102 [2024-11-27 05:50:28.018294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.102 [2024-11-27 05:50:28.018308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.102 qpair failed and we were unable to recover it. 
00:28:40.102 [2024-11-27 05:50:28.028247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.102 [2024-11-27 05:50:28.028300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.102 [2024-11-27 05:50:28.028313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.102 [2024-11-27 05:50:28.028320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.102 [2024-11-27 05:50:28.028326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.102 [2024-11-27 05:50:28.028339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.102 qpair failed and we were unable to recover it. 
00:28:40.102 [2024-11-27 05:50:28.038256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.102 [2024-11-27 05:50:28.038312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.102 [2024-11-27 05:50:28.038326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.102 [2024-11-27 05:50:28.038332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.102 [2024-11-27 05:50:28.038341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.102 [2024-11-27 05:50:28.038356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.102 qpair failed and we were unable to recover it. 
00:28:40.102 [2024-11-27 05:50:28.048270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.102 [2024-11-27 05:50:28.048324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.102 [2024-11-27 05:50:28.048337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.102 [2024-11-27 05:50:28.048344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.102 [2024-11-27 05:50:28.048350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.102 [2024-11-27 05:50:28.048364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.102 qpair failed and we were unable to recover it.
00:28:40.102 [2024-11-27 05:50:28.058331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.102 [2024-11-27 05:50:28.058384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.102 [2024-11-27 05:50:28.058396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.102 [2024-11-27 05:50:28.058403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.102 [2024-11-27 05:50:28.058408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.102 [2024-11-27 05:50:28.058423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.102 qpair failed and we were unable to recover it.
00:28:40.102 [2024-11-27 05:50:28.068340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.102 [2024-11-27 05:50:28.068393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.102 [2024-11-27 05:50:28.068406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.102 [2024-11-27 05:50:28.068412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.102 [2024-11-27 05:50:28.068418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.102 [2024-11-27 05:50:28.068432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.102 qpair failed and we were unable to recover it.
00:28:40.102 [2024-11-27 05:50:28.078403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.102 [2024-11-27 05:50:28.078456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.102 [2024-11-27 05:50:28.078471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.102 [2024-11-27 05:50:28.078477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.102 [2024-11-27 05:50:28.078483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.102 [2024-11-27 05:50:28.078497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.102 qpair failed and we were unable to recover it.
00:28:40.102 [2024-11-27 05:50:28.088410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.102 [2024-11-27 05:50:28.088466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.102 [2024-11-27 05:50:28.088480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.102 [2024-11-27 05:50:28.088486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.102 [2024-11-27 05:50:28.088492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.102 [2024-11-27 05:50:28.088505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.102 qpair failed and we were unable to recover it.
00:28:40.102 [2024-11-27 05:50:28.098426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.102 [2024-11-27 05:50:28.098491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.102 [2024-11-27 05:50:28.098505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.102 [2024-11-27 05:50:28.098512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.102 [2024-11-27 05:50:28.098518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.102 [2024-11-27 05:50:28.098532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.102 qpair failed and we were unable to recover it.
00:28:40.363 [2024-11-27 05:50:28.108455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.363 [2024-11-27 05:50:28.108505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.363 [2024-11-27 05:50:28.108519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.363 [2024-11-27 05:50:28.108526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.363 [2024-11-27 05:50:28.108532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.363 [2024-11-27 05:50:28.108546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.364 qpair failed and we were unable to recover it.
00:28:40.364 [2024-11-27 05:50:28.118542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.364 [2024-11-27 05:50:28.118644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.364 [2024-11-27 05:50:28.118657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.364 [2024-11-27 05:50:28.118664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.364 [2024-11-27 05:50:28.118673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.364 [2024-11-27 05:50:28.118688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.364 qpair failed and we were unable to recover it.
00:28:40.364 [2024-11-27 05:50:28.128514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.364 [2024-11-27 05:50:28.128570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.364 [2024-11-27 05:50:28.128586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.364 [2024-11-27 05:50:28.128593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.364 [2024-11-27 05:50:28.128599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.364 [2024-11-27 05:50:28.128613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.364 qpair failed and we were unable to recover it.
00:28:40.364 [2024-11-27 05:50:28.138540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.364 [2024-11-27 05:50:28.138590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.364 [2024-11-27 05:50:28.138604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.364 [2024-11-27 05:50:28.138610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.364 [2024-11-27 05:50:28.138616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.364 [2024-11-27 05:50:28.138631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.364 qpair failed and we were unable to recover it.
00:28:40.364 [2024-11-27 05:50:28.148545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.364 [2024-11-27 05:50:28.148597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.364 [2024-11-27 05:50:28.148610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.364 [2024-11-27 05:50:28.148616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.364 [2024-11-27 05:50:28.148622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.364 [2024-11-27 05:50:28.148635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.364 qpair failed and we were unable to recover it.
00:28:40.364 [2024-11-27 05:50:28.158599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.364 [2024-11-27 05:50:28.158653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.364 [2024-11-27 05:50:28.158666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.364 [2024-11-27 05:50:28.158677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.364 [2024-11-27 05:50:28.158682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.364 [2024-11-27 05:50:28.158697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.364 qpair failed and we were unable to recover it.
00:28:40.364 [2024-11-27 05:50:28.168631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.364 [2024-11-27 05:50:28.168692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.364 [2024-11-27 05:50:28.168705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.364 [2024-11-27 05:50:28.168712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.364 [2024-11-27 05:50:28.168721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.364 [2024-11-27 05:50:28.168736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.364 qpair failed and we were unable to recover it.
00:28:40.364 [2024-11-27 05:50:28.178643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.364 [2024-11-27 05:50:28.178701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.364 [2024-11-27 05:50:28.178715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.364 [2024-11-27 05:50:28.178721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.364 [2024-11-27 05:50:28.178727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.364 [2024-11-27 05:50:28.178741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.364 qpair failed and we were unable to recover it.
00:28:40.364 [2024-11-27 05:50:28.188665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.364 [2024-11-27 05:50:28.188719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.364 [2024-11-27 05:50:28.188734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.364 [2024-11-27 05:50:28.188741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.364 [2024-11-27 05:50:28.188747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.364 [2024-11-27 05:50:28.188761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.364 qpair failed and we were unable to recover it.
00:28:40.364 [2024-11-27 05:50:28.198721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.364 [2024-11-27 05:50:28.198780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.364 [2024-11-27 05:50:28.198794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.364 [2024-11-27 05:50:28.198801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.365 [2024-11-27 05:50:28.198807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.365 [2024-11-27 05:50:28.198822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.365 qpair failed and we were unable to recover it.
00:28:40.365 [2024-11-27 05:50:28.208732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.365 [2024-11-27 05:50:28.208787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.365 [2024-11-27 05:50:28.208801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.365 [2024-11-27 05:50:28.208808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.365 [2024-11-27 05:50:28.208814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.365 [2024-11-27 05:50:28.208828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.365 qpair failed and we were unable to recover it.
00:28:40.365 [2024-11-27 05:50:28.218768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.365 [2024-11-27 05:50:28.218820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.365 [2024-11-27 05:50:28.218834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.365 [2024-11-27 05:50:28.218840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.365 [2024-11-27 05:50:28.218846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.365 [2024-11-27 05:50:28.218860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.365 qpair failed and we were unable to recover it.
00:28:40.365 [2024-11-27 05:50:28.228784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.365 [2024-11-27 05:50:28.228834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.365 [2024-11-27 05:50:28.228847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.365 [2024-11-27 05:50:28.228854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.365 [2024-11-27 05:50:28.228860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.365 [2024-11-27 05:50:28.228874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.365 qpair failed and we were unable to recover it.
00:28:40.365 [2024-11-27 05:50:28.238819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.365 [2024-11-27 05:50:28.238874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.365 [2024-11-27 05:50:28.238888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.365 [2024-11-27 05:50:28.238894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.365 [2024-11-27 05:50:28.238900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.365 [2024-11-27 05:50:28.238914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.365 qpair failed and we were unable to recover it.
00:28:40.365 [2024-11-27 05:50:28.248767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.365 [2024-11-27 05:50:28.248833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.365 [2024-11-27 05:50:28.248846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.365 [2024-11-27 05:50:28.248853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.365 [2024-11-27 05:50:28.248858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.365 [2024-11-27 05:50:28.248872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.365 qpair failed and we were unable to recover it.
00:28:40.365 [2024-11-27 05:50:28.258910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.365 [2024-11-27 05:50:28.258978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.365 [2024-11-27 05:50:28.258995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.365 [2024-11-27 05:50:28.259001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.365 [2024-11-27 05:50:28.259007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.365 [2024-11-27 05:50:28.259020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.365 qpair failed and we were unable to recover it.
00:28:40.365 [2024-11-27 05:50:28.268925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.365 [2024-11-27 05:50:28.268978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.365 [2024-11-27 05:50:28.268991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.365 [2024-11-27 05:50:28.268998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.365 [2024-11-27 05:50:28.269004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.365 [2024-11-27 05:50:28.269017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.365 qpair failed and we were unable to recover it.
00:28:40.365 [2024-11-27 05:50:28.279003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.365 [2024-11-27 05:50:28.279099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.365 [2024-11-27 05:50:28.279112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.365 [2024-11-27 05:50:28.279119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.365 [2024-11-27 05:50:28.279125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.365 [2024-11-27 05:50:28.279139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.365 qpair failed and we were unable to recover it.
00:28:40.365 [2024-11-27 05:50:28.288958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.365 [2024-11-27 05:50:28.289042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.366 [2024-11-27 05:50:28.289058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.366 [2024-11-27 05:50:28.289064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.366 [2024-11-27 05:50:28.289070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.366 [2024-11-27 05:50:28.289084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.366 qpair failed and we were unable to recover it.
00:28:40.366 [2024-11-27 05:50:28.299033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.366 [2024-11-27 05:50:28.299085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.366 [2024-11-27 05:50:28.299098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.366 [2024-11-27 05:50:28.299104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.366 [2024-11-27 05:50:28.299117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.366 [2024-11-27 05:50:28.299132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.366 qpair failed and we were unable to recover it.
00:28:40.366 [2024-11-27 05:50:28.309066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.366 [2024-11-27 05:50:28.309129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.366 [2024-11-27 05:50:28.309142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.366 [2024-11-27 05:50:28.309149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.366 [2024-11-27 05:50:28.309155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.366 [2024-11-27 05:50:28.309169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.366 qpair failed and we were unable to recover it.
00:28:40.366 [2024-11-27 05:50:28.319031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.366 [2024-11-27 05:50:28.319084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.366 [2024-11-27 05:50:28.319099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.366 [2024-11-27 05:50:28.319105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.366 [2024-11-27 05:50:28.319111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.366 [2024-11-27 05:50:28.319125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.366 qpair failed and we were unable to recover it.
00:28:40.366 [2024-11-27 05:50:28.329079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.366 [2024-11-27 05:50:28.329136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.366 [2024-11-27 05:50:28.329151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.366 [2024-11-27 05:50:28.329158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.366 [2024-11-27 05:50:28.329163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.366 [2024-11-27 05:50:28.329177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.366 qpair failed and we were unable to recover it.
00:28:40.366 [2024-11-27 05:50:28.339106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.366 [2024-11-27 05:50:28.339161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.366 [2024-11-27 05:50:28.339175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.366 [2024-11-27 05:50:28.339182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.366 [2024-11-27 05:50:28.339187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.366 [2024-11-27 05:50:28.339201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.366 qpair failed and we were unable to recover it.
00:28:40.366 [2024-11-27 05:50:28.349145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.366 [2024-11-27 05:50:28.349200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.366 [2024-11-27 05:50:28.349214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.366 [2024-11-27 05:50:28.349220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.366 [2024-11-27 05:50:28.349226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.366 [2024-11-27 05:50:28.349240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.366 qpair failed and we were unable to recover it.
00:28:40.366 [2024-11-27 05:50:28.359171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.366 [2024-11-27 05:50:28.359222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.366 [2024-11-27 05:50:28.359235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.366 [2024-11-27 05:50:28.359242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.366 [2024-11-27 05:50:28.359248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.366 [2024-11-27 05:50:28.359262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.366 qpair failed and we were unable to recover it.
00:28:40.627 [2024-11-27 05:50:28.369192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.627 [2024-11-27 05:50:28.369250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.627 [2024-11-27 05:50:28.369263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.627 [2024-11-27 05:50:28.369270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.627 [2024-11-27 05:50:28.369276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.627 [2024-11-27 05:50:28.369290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.627 qpair failed and we were unable to recover it.
00:28:40.627 [2024-11-27 05:50:28.379216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.627 [2024-11-27 05:50:28.379277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.627 [2024-11-27 05:50:28.379291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.627 [2024-11-27 05:50:28.379297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.627 [2024-11-27 05:50:28.379303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.627 [2024-11-27 05:50:28.379317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.627 qpair failed and we were unable to recover it.
00:28:40.627 [2024-11-27 05:50:28.389261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.627 [2024-11-27 05:50:28.389313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.627 [2024-11-27 05:50:28.389330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.627 [2024-11-27 05:50:28.389337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.627 [2024-11-27 05:50:28.389343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.627 [2024-11-27 05:50:28.389356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.627 qpair failed and we were unable to recover it.
00:28:40.627 [2024-11-27 05:50:28.399284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.627 [2024-11-27 05:50:28.399337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.627 [2024-11-27 05:50:28.399350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.627 [2024-11-27 05:50:28.399357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.627 [2024-11-27 05:50:28.399362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.627 [2024-11-27 05:50:28.399377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.627 qpair failed and we were unable to recover it. 
00:28:40.627 [2024-11-27 05:50:28.409314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.627 [2024-11-27 05:50:28.409370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.627 [2024-11-27 05:50:28.409384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.627 [2024-11-27 05:50:28.409390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.627 [2024-11-27 05:50:28.409396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.627 [2024-11-27 05:50:28.409410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.627 qpair failed and we were unable to recover it. 
00:28:40.627 [2024-11-27 05:50:28.419325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.627 [2024-11-27 05:50:28.419374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.627 [2024-11-27 05:50:28.419387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.627 [2024-11-27 05:50:28.419393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.627 [2024-11-27 05:50:28.419399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.627 [2024-11-27 05:50:28.419412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.627 qpair failed and we were unable to recover it. 
00:28:40.627 [2024-11-27 05:50:28.429392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.627 [2024-11-27 05:50:28.429443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.627 [2024-11-27 05:50:28.429456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.627 [2024-11-27 05:50:28.429462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.627 [2024-11-27 05:50:28.429471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.627 [2024-11-27 05:50:28.429485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.627 qpair failed and we were unable to recover it. 
00:28:40.627 [2024-11-27 05:50:28.439421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.627 [2024-11-27 05:50:28.439482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.627 [2024-11-27 05:50:28.439496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.627 [2024-11-27 05:50:28.439502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.627 [2024-11-27 05:50:28.439508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.627 [2024-11-27 05:50:28.439522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.627 qpair failed and we were unable to recover it. 
00:28:40.627 [2024-11-27 05:50:28.449421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.627 [2024-11-27 05:50:28.449472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.627 [2024-11-27 05:50:28.449485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.627 [2024-11-27 05:50:28.449492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.627 [2024-11-27 05:50:28.449498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.627 [2024-11-27 05:50:28.449512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.627 qpair failed and we were unable to recover it. 
00:28:40.627 [2024-11-27 05:50:28.459460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.459525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.459539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.459545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.459551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.459566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.469448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.469502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.469515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.469522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.469527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.469541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.479511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.479564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.479577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.479583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.479589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.479604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.489461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.489516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.489529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.489536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.489542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.489556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.499576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.499629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.499643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.499650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.499656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.499673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.509543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.509595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.509609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.509617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.509624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.509638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.519621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.519680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.519698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.519704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.519710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.519724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.529667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.529732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.529746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.529752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.529758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.529772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.539667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.539729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.539743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.539749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.539755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.539769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.549695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.549744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.549757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.549764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.549770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.549784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.559732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.559791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.559804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.559810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.559819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.628 [2024-11-27 05:50:28.559833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.628 qpair failed and we were unable to recover it. 
00:28:40.628 [2024-11-27 05:50:28.569762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.628 [2024-11-27 05:50:28.569820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.628 [2024-11-27 05:50:28.569834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.628 [2024-11-27 05:50:28.569840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.628 [2024-11-27 05:50:28.569846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.629 [2024-11-27 05:50:28.569860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.629 qpair failed and we were unable to recover it. 
00:28:40.629 [2024-11-27 05:50:28.579807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.629 [2024-11-27 05:50:28.579891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.629 [2024-11-27 05:50:28.579904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.629 [2024-11-27 05:50:28.579910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.629 [2024-11-27 05:50:28.579916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.629 [2024-11-27 05:50:28.579930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.629 qpair failed and we were unable to recover it. 
00:28:40.629 [2024-11-27 05:50:28.589772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.629 [2024-11-27 05:50:28.589830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.629 [2024-11-27 05:50:28.589844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.629 [2024-11-27 05:50:28.589850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.629 [2024-11-27 05:50:28.589856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.629 [2024-11-27 05:50:28.589870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.629 qpair failed and we were unable to recover it. 
00:28:40.629 [2024-11-27 05:50:28.599863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.629 [2024-11-27 05:50:28.599916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.629 [2024-11-27 05:50:28.599930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.629 [2024-11-27 05:50:28.599936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.629 [2024-11-27 05:50:28.599942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.629 [2024-11-27 05:50:28.599957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.629 qpair failed and we were unable to recover it. 
00:28:40.629 [2024-11-27 05:50:28.609910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.629 [2024-11-27 05:50:28.609963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.629 [2024-11-27 05:50:28.609976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.629 [2024-11-27 05:50:28.609982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.629 [2024-11-27 05:50:28.609988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.629 [2024-11-27 05:50:28.610001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.629 qpair failed and we were unable to recover it. 
00:28:40.629 [2024-11-27 05:50:28.619925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.629 [2024-11-27 05:50:28.619987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.629 [2024-11-27 05:50:28.620000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.629 [2024-11-27 05:50:28.620006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.629 [2024-11-27 05:50:28.620013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.629 [2024-11-27 05:50:28.620027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.629 qpair failed and we were unable to recover it. 
00:28:40.890 [2024-11-27 05:50:28.629896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.890 [2024-11-27 05:50:28.629971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.890 [2024-11-27 05:50:28.629986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.890 [2024-11-27 05:50:28.629993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.890 [2024-11-27 05:50:28.629999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.890 [2024-11-27 05:50:28.630013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.890 qpair failed and we were unable to recover it. 
00:28:40.890 [2024-11-27 05:50:28.640002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.890 [2024-11-27 05:50:28.640059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.890 [2024-11-27 05:50:28.640073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.890 [2024-11-27 05:50:28.640079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.890 [2024-11-27 05:50:28.640085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.890 [2024-11-27 05:50:28.640100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.890 qpair failed and we were unable to recover it. 
00:28:40.890 [2024-11-27 05:50:28.650046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.890 [2024-11-27 05:50:28.650104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.890 [2024-11-27 05:50:28.650121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.890 [2024-11-27 05:50:28.650127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.890 [2024-11-27 05:50:28.650133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.890 [2024-11-27 05:50:28.650147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.890 qpair failed and we were unable to recover it. 
00:28:40.890 [2024-11-27 05:50:28.659999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.890 [2024-11-27 05:50:28.660054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.890 [2024-11-27 05:50:28.660068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.890 [2024-11-27 05:50:28.660074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.890 [2024-11-27 05:50:28.660080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.890 [2024-11-27 05:50:28.660095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.890 qpair failed and we were unable to recover it. 
00:28:40.890 [2024-11-27 05:50:28.669971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.890 [2024-11-27 05:50:28.670027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.890 [2024-11-27 05:50:28.670040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.890 [2024-11-27 05:50:28.670046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.890 [2024-11-27 05:50:28.670052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.890 [2024-11-27 05:50:28.670067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.890 qpair failed and we were unable to recover it. 
00:28:40.890 [2024-11-27 05:50:28.680136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:40.890 [2024-11-27 05:50:28.680190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:40.890 [2024-11-27 05:50:28.680203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:40.890 [2024-11-27 05:50:28.680209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:40.890 [2024-11-27 05:50:28.680215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:40.890 [2024-11-27 05:50:28.680229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:40.890 qpair failed and we were unable to recover it. 
00:28:40.890 [2024-11-27 05:50:28.690140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.890 [2024-11-27 05:50:28.690204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.890 [2024-11-27 05:50:28.690217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.890 [2024-11-27 05:50:28.690224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.890 [2024-11-27 05:50:28.690233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.890 [2024-11-27 05:50:28.690247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.890 qpair failed and we were unable to recover it.
00:28:40.890 [2024-11-27 05:50:28.700200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.890 [2024-11-27 05:50:28.700247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.890 [2024-11-27 05:50:28.700262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.890 [2024-11-27 05:50:28.700268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.890 [2024-11-27 05:50:28.700274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.890 [2024-11-27 05:50:28.700289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.890 qpair failed and we were unable to recover it.
00:28:40.890 [2024-11-27 05:50:28.710114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.890 [2024-11-27 05:50:28.710168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.890 [2024-11-27 05:50:28.710181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.890 [2024-11-27 05:50:28.710187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.890 [2024-11-27 05:50:28.710193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.890 [2024-11-27 05:50:28.710207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.890 qpair failed and we were unable to recover it.
00:28:40.890 [2024-11-27 05:50:28.720185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.890 [2024-11-27 05:50:28.720239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.720252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.720258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.720265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.720279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.730225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.730283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.730296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.730303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.730309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.730323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.740178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.740233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.740246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.740253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.740259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.740273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.750335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.750425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.750439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.750445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.750451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.750464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.760311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.760377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.760391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.760397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.760403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.760417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.770423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.770486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.770499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.770506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.770511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.770526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.780312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.780365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.780382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.780389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.780394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.780408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.790307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.790363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.790376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.790383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.790389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.790402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.800384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.800441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.800455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.800461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.800467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.800480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.810444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.810503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.810516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.810523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.810528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.810542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.820476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.820530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.820544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.820550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.820559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.820574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.830428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.830479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.830492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.830498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.891 [2024-11-27 05:50:28.830504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.891 [2024-11-27 05:50:28.830519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.891 qpair failed and we were unable to recover it.
00:28:40.891 [2024-11-27 05:50:28.840479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.891 [2024-11-27 05:50:28.840562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.891 [2024-11-27 05:50:28.840576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.891 [2024-11-27 05:50:28.840582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.892 [2024-11-27 05:50:28.840588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.892 [2024-11-27 05:50:28.840603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.892 qpair failed and we were unable to recover it.
00:28:40.892 [2024-11-27 05:50:28.850593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.892 [2024-11-27 05:50:28.850698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.892 [2024-11-27 05:50:28.850713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.892 [2024-11-27 05:50:28.850720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.892 [2024-11-27 05:50:28.850725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.892 [2024-11-27 05:50:28.850739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.892 qpair failed and we were unable to recover it.
00:28:40.892 [2024-11-27 05:50:28.860545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.892 [2024-11-27 05:50:28.860597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.892 [2024-11-27 05:50:28.860611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.892 [2024-11-27 05:50:28.860618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.892 [2024-11-27 05:50:28.860625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.892 [2024-11-27 05:50:28.860639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.892 qpair failed and we were unable to recover it.
00:28:40.892 [2024-11-27 05:50:28.870575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.892 [2024-11-27 05:50:28.870658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.892 [2024-11-27 05:50:28.870675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.892 [2024-11-27 05:50:28.870682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.892 [2024-11-27 05:50:28.870687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.892 [2024-11-27 05:50:28.870701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.892 qpair failed and we were unable to recover it.
00:28:40.892 [2024-11-27 05:50:28.880646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.892 [2024-11-27 05:50:28.880708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.892 [2024-11-27 05:50:28.880723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.892 [2024-11-27 05:50:28.880729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.892 [2024-11-27 05:50:28.880735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.892 [2024-11-27 05:50:28.880749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.892 qpair failed and we were unable to recover it.
00:28:40.892 [2024-11-27 05:50:28.890604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:40.892 [2024-11-27 05:50:28.890662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:40.892 [2024-11-27 05:50:28.890680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:40.892 [2024-11-27 05:50:28.890687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:40.892 [2024-11-27 05:50:28.890693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:40.892 [2024-11-27 05:50:28.890707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:40.892 qpair failed and we were unable to recover it.
00:28:41.152 [2024-11-27 05:50:28.900698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.152 [2024-11-27 05:50:28.900753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.152 [2024-11-27 05:50:28.900767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.152 [2024-11-27 05:50:28.900773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.152 [2024-11-27 05:50:28.900780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.152 [2024-11-27 05:50:28.900794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.152 qpair failed and we were unable to recover it.
00:28:41.152 [2024-11-27 05:50:28.910726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.152 [2024-11-27 05:50:28.910778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.152 [2024-11-27 05:50:28.910798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.152 [2024-11-27 05:50:28.910805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.152 [2024-11-27 05:50:28.910810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.152 [2024-11-27 05:50:28.910825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.152 qpair failed and we were unable to recover it.
00:28:41.152 [2024-11-27 05:50:28.920766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.152 [2024-11-27 05:50:28.920821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.152 [2024-11-27 05:50:28.920834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.152 [2024-11-27 05:50:28.920841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.152 [2024-11-27 05:50:28.920846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.152 [2024-11-27 05:50:28.920861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.152 qpair failed and we were unable to recover it.
00:28:41.152 [2024-11-27 05:50:28.930769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.152 [2024-11-27 05:50:28.930840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.152 [2024-11-27 05:50:28.930853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.152 [2024-11-27 05:50:28.930860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.152 [2024-11-27 05:50:28.930866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.152 [2024-11-27 05:50:28.930880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.152 qpair failed and we were unable to recover it.
00:28:41.153 [2024-11-27 05:50:28.940858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.153 [2024-11-27 05:50:28.940915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.153 [2024-11-27 05:50:28.940929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.153 [2024-11-27 05:50:28.940935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.153 [2024-11-27 05:50:28.940941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.153 [2024-11-27 05:50:28.940955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.153 qpair failed and we were unable to recover it.
00:28:41.153 [2024-11-27 05:50:28.950856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.153 [2024-11-27 05:50:28.950911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.153 [2024-11-27 05:50:28.950925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.153 [2024-11-27 05:50:28.950931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.153 [2024-11-27 05:50:28.950944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.153 [2024-11-27 05:50:28.950958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.153 qpair failed and we were unable to recover it.
00:28:41.153 [2024-11-27 05:50:28.960885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.153 [2024-11-27 05:50:28.960943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.153 [2024-11-27 05:50:28.960959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.153 [2024-11-27 05:50:28.960967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.153 [2024-11-27 05:50:28.960973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.153 [2024-11-27 05:50:28.960988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.153 qpair failed and we were unable to recover it.
00:28:41.153 [2024-11-27 05:50:28.970833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.153 [2024-11-27 05:50:28.970885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.153 [2024-11-27 05:50:28.970898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.153 [2024-11-27 05:50:28.970904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.153 [2024-11-27 05:50:28.970911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.153 [2024-11-27 05:50:28.970925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.153 qpair failed and we were unable to recover it.
00:28:41.153 [2024-11-27 05:50:28.980954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.153 [2024-11-27 05:50:28.981008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.153 [2024-11-27 05:50:28.981022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.153 [2024-11-27 05:50:28.981029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.153 [2024-11-27 05:50:28.981035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.153 [2024-11-27 05:50:28.981049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.153 qpair failed and we were unable to recover it.
00:28:41.153 [2024-11-27 05:50:28.990975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.153 [2024-11-27 05:50:28.991027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.153 [2024-11-27 05:50:28.991040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.153 [2024-11-27 05:50:28.991047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.153 [2024-11-27 05:50:28.991052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.153 [2024-11-27 05:50:28.991066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.153 qpair failed and we were unable to recover it.
00:28:41.153 [2024-11-27 05:50:29.001049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.153 [2024-11-27 05:50:29.001152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.153 [2024-11-27 05:50:29.001165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.153 [2024-11-27 05:50:29.001171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.153 [2024-11-27 05:50:29.001177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.153 [2024-11-27 05:50:29.001191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.153 qpair failed and we were unable to recover it. 
00:28:41.153 [2024-11-27 05:50:29.011042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.153 [2024-11-27 05:50:29.011099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.153 [2024-11-27 05:50:29.011112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.153 [2024-11-27 05:50:29.011119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.153 [2024-11-27 05:50:29.011125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.153 [2024-11-27 05:50:29.011139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.153 qpair failed and we were unable to recover it. 
00:28:41.153 [2024-11-27 05:50:29.021082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.153 [2024-11-27 05:50:29.021140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.153 [2024-11-27 05:50:29.021154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.153 [2024-11-27 05:50:29.021160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.153 [2024-11-27 05:50:29.021166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.153 [2024-11-27 05:50:29.021180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.153 qpair failed and we were unable to recover it. 
00:28:41.153 [2024-11-27 05:50:29.031028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.153 [2024-11-27 05:50:29.031081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.153 [2024-11-27 05:50:29.031094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.153 [2024-11-27 05:50:29.031100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.153 [2024-11-27 05:50:29.031106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.153 [2024-11-27 05:50:29.031120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.153 qpair failed and we were unable to recover it. 
00:28:41.153 [2024-11-27 05:50:29.041099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.153 [2024-11-27 05:50:29.041151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.153 [2024-11-27 05:50:29.041167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.153 [2024-11-27 05:50:29.041174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.153 [2024-11-27 05:50:29.041179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.153 [2024-11-27 05:50:29.041194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.153 qpair failed and we were unable to recover it. 
00:28:41.153 [2024-11-27 05:50:29.051131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.153 [2024-11-27 05:50:29.051186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.153 [2024-11-27 05:50:29.051199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.153 [2024-11-27 05:50:29.051206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.153 [2024-11-27 05:50:29.051211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.153 [2024-11-27 05:50:29.051225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.153 qpair failed and we were unable to recover it. 
00:28:41.154 [2024-11-27 05:50:29.061156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.154 [2024-11-27 05:50:29.061205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.154 [2024-11-27 05:50:29.061219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.154 [2024-11-27 05:50:29.061225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.154 [2024-11-27 05:50:29.061230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.154 [2024-11-27 05:50:29.061245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.154 qpair failed and we were unable to recover it. 
00:28:41.154 [2024-11-27 05:50:29.071177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.154 [2024-11-27 05:50:29.071225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.154 [2024-11-27 05:50:29.071238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.154 [2024-11-27 05:50:29.071244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.154 [2024-11-27 05:50:29.071250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.154 [2024-11-27 05:50:29.071264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.154 qpair failed and we were unable to recover it. 
00:28:41.154 [2024-11-27 05:50:29.081193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.154 [2024-11-27 05:50:29.081248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.154 [2024-11-27 05:50:29.081261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.154 [2024-11-27 05:50:29.081267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.154 [2024-11-27 05:50:29.081276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.154 [2024-11-27 05:50:29.081290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.154 qpair failed and we were unable to recover it. 
00:28:41.154 [2024-11-27 05:50:29.091283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.154 [2024-11-27 05:50:29.091343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.154 [2024-11-27 05:50:29.091356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.154 [2024-11-27 05:50:29.091362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.154 [2024-11-27 05:50:29.091368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.154 [2024-11-27 05:50:29.091381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.154 qpair failed and we were unable to recover it. 
00:28:41.154 [2024-11-27 05:50:29.101284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.154 [2024-11-27 05:50:29.101338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.154 [2024-11-27 05:50:29.101351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.154 [2024-11-27 05:50:29.101357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.154 [2024-11-27 05:50:29.101363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.154 [2024-11-27 05:50:29.101376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.154 qpair failed and we were unable to recover it. 
00:28:41.154 [2024-11-27 05:50:29.111292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.154 [2024-11-27 05:50:29.111345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.154 [2024-11-27 05:50:29.111359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.154 [2024-11-27 05:50:29.111365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.154 [2024-11-27 05:50:29.111371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.154 [2024-11-27 05:50:29.111385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.154 qpair failed and we were unable to recover it. 
00:28:41.154 [2024-11-27 05:50:29.121344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.154 [2024-11-27 05:50:29.121396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.154 [2024-11-27 05:50:29.121411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.154 [2024-11-27 05:50:29.121417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.154 [2024-11-27 05:50:29.121423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.154 [2024-11-27 05:50:29.121438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.154 qpair failed and we were unable to recover it. 
00:28:41.154 [2024-11-27 05:50:29.131359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.154 [2024-11-27 05:50:29.131423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.154 [2024-11-27 05:50:29.131436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.154 [2024-11-27 05:50:29.131443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.154 [2024-11-27 05:50:29.131448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.154 [2024-11-27 05:50:29.131463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.154 qpair failed and we were unable to recover it. 
00:28:41.154 [2024-11-27 05:50:29.141389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.154 [2024-11-27 05:50:29.141442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.154 [2024-11-27 05:50:29.141456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.154 [2024-11-27 05:50:29.141462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.154 [2024-11-27 05:50:29.141468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.154 [2024-11-27 05:50:29.141482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.154 qpair failed and we were unable to recover it. 
00:28:41.154 [2024-11-27 05:50:29.151419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.154 [2024-11-27 05:50:29.151474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.154 [2024-11-27 05:50:29.151488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.154 [2024-11-27 05:50:29.151494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.154 [2024-11-27 05:50:29.151500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.154 [2024-11-27 05:50:29.151514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.154 qpair failed and we were unable to recover it. 
00:28:41.413 [2024-11-27 05:50:29.161448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.413 [2024-11-27 05:50:29.161504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.413 [2024-11-27 05:50:29.161517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.413 [2024-11-27 05:50:29.161524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.413 [2024-11-27 05:50:29.161530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.413 [2024-11-27 05:50:29.161544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.413 qpair failed and we were unable to recover it. 
00:28:41.413 [2024-11-27 05:50:29.171509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.413 [2024-11-27 05:50:29.171564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.413 [2024-11-27 05:50:29.171581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.413 [2024-11-27 05:50:29.171588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.413 [2024-11-27 05:50:29.171594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.413 [2024-11-27 05:50:29.171608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.413 qpair failed and we were unable to recover it. 
00:28:41.413 [2024-11-27 05:50:29.181502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.413 [2024-11-27 05:50:29.181558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.413 [2024-11-27 05:50:29.181571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.413 [2024-11-27 05:50:29.181577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.413 [2024-11-27 05:50:29.181583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.413 [2024-11-27 05:50:29.181597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.413 qpair failed and we were unable to recover it. 
00:28:41.413 [2024-11-27 05:50:29.191528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.413 [2024-11-27 05:50:29.191578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.191591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.191598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.191604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.191618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.201578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.201632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.201645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.201652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.201658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.201675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.211595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.211653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.211666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.211676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.211685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.211700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.221630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.221684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.221697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.221703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.221709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.221723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.231651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.231710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.231723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.231730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.231736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.231750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.241693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.241746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.241760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.241767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.241773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.241787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.251721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.251774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.251787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.251793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.251799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.251813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.261742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.261814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.261827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.261834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.261839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.261853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.271764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.271816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.271829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.271836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.271841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.271855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.281797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.281852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.281865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.281871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.281877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.281891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.291918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.291981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.291995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.292001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.292007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.292021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.301878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.301930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.301947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.301953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.301959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.301973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.311922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.311978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.311992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.311999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.312005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.312018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.321954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.322010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.322023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.322030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.322036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.322049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.331967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.332028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.332043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.332049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.332055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.332069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.341966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.342018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.342031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.342038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.342047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.342061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.352008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.352061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.352075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.352081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.352087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.352101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.362081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.414 [2024-11-27 05:50:29.362182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.414 [2024-11-27 05:50:29.362196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.414 [2024-11-27 05:50:29.362203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.414 [2024-11-27 05:50:29.362209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.414 [2024-11-27 05:50:29.362223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.414 qpair failed and we were unable to recover it. 
00:28:41.414 [2024-11-27 05:50:29.372078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.415 [2024-11-27 05:50:29.372128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.415 [2024-11-27 05:50:29.372141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.415 [2024-11-27 05:50:29.372147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.415 [2024-11-27 05:50:29.372153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.415 [2024-11-27 05:50:29.372167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.415 qpair failed and we were unable to recover it. 
00:28:41.415 [2024-11-27 05:50:29.382113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.415 [2024-11-27 05:50:29.382217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.415 [2024-11-27 05:50:29.382231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.415 [2024-11-27 05:50:29.382237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.415 [2024-11-27 05:50:29.382243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.415 [2024-11-27 05:50:29.382257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.415 qpair failed and we were unable to recover it. 
00:28:41.415 [2024-11-27 05:50:29.392069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.415 [2024-11-27 05:50:29.392119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.415 [2024-11-27 05:50:29.392132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.415 [2024-11-27 05:50:29.392139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.415 [2024-11-27 05:50:29.392145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.415 [2024-11-27 05:50:29.392159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.415 qpair failed and we were unable to recover it. 
00:28:41.415 [2024-11-27 05:50:29.402141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.415 [2024-11-27 05:50:29.402230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.415 [2024-11-27 05:50:29.402243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.415 [2024-11-27 05:50:29.402249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.415 [2024-11-27 05:50:29.402254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.415 [2024-11-27 05:50:29.402268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.415 qpair failed and we were unable to recover it. 
00:28:41.415 [2024-11-27 05:50:29.412158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.415 [2024-11-27 05:50:29.412215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.415 [2024-11-27 05:50:29.412228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.415 [2024-11-27 05:50:29.412235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.415 [2024-11-27 05:50:29.412241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.415 [2024-11-27 05:50:29.412255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.415 qpair failed and we were unable to recover it. 
00:28:41.674 [2024-11-27 05:50:29.422211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.422266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.422280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.675 [2024-11-27 05:50:29.422286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.675 [2024-11-27 05:50:29.422293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.675 [2024-11-27 05:50:29.422306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.675 qpair failed and we were unable to recover it. 
00:28:41.675 [2024-11-27 05:50:29.432219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.432269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.432286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.675 [2024-11-27 05:50:29.432293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.675 [2024-11-27 05:50:29.432298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.675 [2024-11-27 05:50:29.432312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.675 qpair failed and we were unable to recover it. 
00:28:41.675 [2024-11-27 05:50:29.442251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.442306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.442320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.675 [2024-11-27 05:50:29.442326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.675 [2024-11-27 05:50:29.442332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.675 [2024-11-27 05:50:29.442346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.675 qpair failed and we were unable to recover it. 
00:28:41.675 [2024-11-27 05:50:29.452200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.452253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.452266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.675 [2024-11-27 05:50:29.452272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.675 [2024-11-27 05:50:29.452278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.675 [2024-11-27 05:50:29.452292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.675 qpair failed and we were unable to recover it. 
00:28:41.675 [2024-11-27 05:50:29.462284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.462333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.462346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.675 [2024-11-27 05:50:29.462352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.675 [2024-11-27 05:50:29.462357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.675 [2024-11-27 05:50:29.462372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.675 qpair failed and we were unable to recover it. 
00:28:41.675 [2024-11-27 05:50:29.472341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.472391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.472404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.675 [2024-11-27 05:50:29.472411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.675 [2024-11-27 05:50:29.472419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.675 [2024-11-27 05:50:29.472433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.675 qpair failed and we were unable to recover it. 
00:28:41.675 [2024-11-27 05:50:29.482371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.482446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.482459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.675 [2024-11-27 05:50:29.482466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.675 [2024-11-27 05:50:29.482471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.675 [2024-11-27 05:50:29.482486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.675 qpair failed and we were unable to recover it. 
00:28:41.675 [2024-11-27 05:50:29.492404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.492458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.492471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.675 [2024-11-27 05:50:29.492478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.675 [2024-11-27 05:50:29.492483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.675 [2024-11-27 05:50:29.492498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.675 qpair failed and we were unable to recover it. 
00:28:41.675 [2024-11-27 05:50:29.502434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.502487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.502501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.675 [2024-11-27 05:50:29.502507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.675 [2024-11-27 05:50:29.502513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.675 [2024-11-27 05:50:29.502527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.675 qpair failed and we were unable to recover it. 
00:28:41.675 [2024-11-27 05:50:29.512441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.512492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.512505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.675 [2024-11-27 05:50:29.512512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.675 [2024-11-27 05:50:29.512518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.675 [2024-11-27 05:50:29.512531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.675 qpair failed and we were unable to recover it. 
00:28:41.675 [2024-11-27 05:50:29.522478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.675 [2024-11-27 05:50:29.522534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.675 [2024-11-27 05:50:29.522547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.676 [2024-11-27 05:50:29.522554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.676 [2024-11-27 05:50:29.522559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.676 [2024-11-27 05:50:29.522574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.676 qpair failed and we were unable to recover it. 
00:28:41.676 [2024-11-27 05:50:29.532542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.676 [2024-11-27 05:50:29.532603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.676 [2024-11-27 05:50:29.532618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.676 [2024-11-27 05:50:29.532625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.676 [2024-11-27 05:50:29.532631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.676 [2024-11-27 05:50:29.532645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.676 qpair failed and we were unable to recover it. 
00:28:41.676 [2024-11-27 05:50:29.542524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.676 [2024-11-27 05:50:29.542576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.676 [2024-11-27 05:50:29.542590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.676 [2024-11-27 05:50:29.542596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.676 [2024-11-27 05:50:29.542602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.676 [2024-11-27 05:50:29.542616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.676 qpair failed and we were unable to recover it. 
00:28:41.676 [2024-11-27 05:50:29.552546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.676 [2024-11-27 05:50:29.552598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.676 [2024-11-27 05:50:29.552611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.676 [2024-11-27 05:50:29.552618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.676 [2024-11-27 05:50:29.552623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.676 [2024-11-27 05:50:29.552637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.676 qpair failed and we were unable to recover it. 
00:28:41.676 [2024-11-27 05:50:29.562590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.676 [2024-11-27 05:50:29.562645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.676 [2024-11-27 05:50:29.562661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.676 [2024-11-27 05:50:29.562667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.676 [2024-11-27 05:50:29.562677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.676 [2024-11-27 05:50:29.562691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.676 qpair failed and we were unable to recover it. 
00:28:41.676 [2024-11-27 05:50:29.572612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.676 [2024-11-27 05:50:29.572664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.676 [2024-11-27 05:50:29.572679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.676 [2024-11-27 05:50:29.572686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.676 [2024-11-27 05:50:29.572692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.676 [2024-11-27 05:50:29.572706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.676 qpair failed and we were unable to recover it. 
00:28:41.676 [2024-11-27 05:50:29.582665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.676 [2024-11-27 05:50:29.582723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.676 [2024-11-27 05:50:29.582736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.676 [2024-11-27 05:50:29.582743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.676 [2024-11-27 05:50:29.582749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.676 [2024-11-27 05:50:29.582762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.676 qpair failed and we were unable to recover it. 
00:28:41.676 [2024-11-27 05:50:29.592676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:41.676 [2024-11-27 05:50:29.592731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:41.676 [2024-11-27 05:50:29.592746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:41.676 [2024-11-27 05:50:29.592753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:41.676 [2024-11-27 05:50:29.592758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:41.676 [2024-11-27 05:50:29.592773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:41.676 qpair failed and we were unable to recover it. 
00:28:41.676 [2024-11-27 05:50:29.602721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.676 [2024-11-27 05:50:29.602775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.676 [2024-11-27 05:50:29.602788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.676 [2024-11-27 05:50:29.602794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.676 [2024-11-27 05:50:29.602806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.676 [2024-11-27 05:50:29.602821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.676 qpair failed and we were unable to recover it.
00:28:41.676 [2024-11-27 05:50:29.612754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.676 [2024-11-27 05:50:29.612811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.676 [2024-11-27 05:50:29.612825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.676 [2024-11-27 05:50:29.612831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.676 [2024-11-27 05:50:29.612838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.676 [2024-11-27 05:50:29.612851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.676 qpair failed and we were unable to recover it.
00:28:41.676 [2024-11-27 05:50:29.622748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.676 [2024-11-27 05:50:29.622806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.676 [2024-11-27 05:50:29.622821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.676 [2024-11-27 05:50:29.622827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.676 [2024-11-27 05:50:29.622833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.676 [2024-11-27 05:50:29.622847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.676 qpair failed and we were unable to recover it.
00:28:41.676 [2024-11-27 05:50:29.632769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.676 [2024-11-27 05:50:29.632820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.676 [2024-11-27 05:50:29.632834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.677 [2024-11-27 05:50:29.632840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.677 [2024-11-27 05:50:29.632847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.677 [2024-11-27 05:50:29.632861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.677 qpair failed and we were unable to recover it.
00:28:41.677 [2024-11-27 05:50:29.642822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.677 [2024-11-27 05:50:29.642878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.677 [2024-11-27 05:50:29.642892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.677 [2024-11-27 05:50:29.642898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.677 [2024-11-27 05:50:29.642904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.677 [2024-11-27 05:50:29.642918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.677 qpair failed and we were unable to recover it.
00:28:41.677 [2024-11-27 05:50:29.652884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.677 [2024-11-27 05:50:29.652939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.677 [2024-11-27 05:50:29.652953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.677 [2024-11-27 05:50:29.652959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.677 [2024-11-27 05:50:29.652965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.677 [2024-11-27 05:50:29.652979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.677 qpair failed and we were unable to recover it.
00:28:41.677 [2024-11-27 05:50:29.662863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.677 [2024-11-27 05:50:29.662934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.677 [2024-11-27 05:50:29.662947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.677 [2024-11-27 05:50:29.662953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.677 [2024-11-27 05:50:29.662959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.677 [2024-11-27 05:50:29.662973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.677 qpair failed and we were unable to recover it.
00:28:41.677 [2024-11-27 05:50:29.672881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.677 [2024-11-27 05:50:29.672932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.677 [2024-11-27 05:50:29.672945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.677 [2024-11-27 05:50:29.672952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.677 [2024-11-27 05:50:29.672958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.677 [2024-11-27 05:50:29.672972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.677 qpair failed and we were unable to recover it.
00:28:41.937 [2024-11-27 05:50:29.682843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.937 [2024-11-27 05:50:29.682902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.937 [2024-11-27 05:50:29.682916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.937 [2024-11-27 05:50:29.682923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.937 [2024-11-27 05:50:29.682929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.937 [2024-11-27 05:50:29.682943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.937 qpair failed and we were unable to recover it.
00:28:41.937 [2024-11-27 05:50:29.692957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.937 [2024-11-27 05:50:29.693013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.937 [2024-11-27 05:50:29.693029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.937 [2024-11-27 05:50:29.693036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.937 [2024-11-27 05:50:29.693042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.937 [2024-11-27 05:50:29.693055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.937 qpair failed and we were unable to recover it.
00:28:41.937 [2024-11-27 05:50:29.702984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.937 [2024-11-27 05:50:29.703032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.937 [2024-11-27 05:50:29.703045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.937 [2024-11-27 05:50:29.703052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.937 [2024-11-27 05:50:29.703057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.938 [2024-11-27 05:50:29.703071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.938 qpair failed and we were unable to recover it.
00:28:41.938 [2024-11-27 05:50:29.712940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.938 [2024-11-27 05:50:29.713032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.938 [2024-11-27 05:50:29.713045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.938 [2024-11-27 05:50:29.713051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.938 [2024-11-27 05:50:29.713057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.938 [2024-11-27 05:50:29.713071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.938 qpair failed and we were unable to recover it.
00:28:41.938 [2024-11-27 05:50:29.723046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.938 [2024-11-27 05:50:29.723098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.938 [2024-11-27 05:50:29.723112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.938 [2024-11-27 05:50:29.723118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.938 [2024-11-27 05:50:29.723124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.938 [2024-11-27 05:50:29.723137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.938 qpair failed and we were unable to recover it.
00:28:41.938 [2024-11-27 05:50:29.733097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.938 [2024-11-27 05:50:29.733154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.938 [2024-11-27 05:50:29.733167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.938 [2024-11-27 05:50:29.733177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.938 [2024-11-27 05:50:29.733183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.938 [2024-11-27 05:50:29.733197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.938 qpair failed and we were unable to recover it.
00:28:41.938 [2024-11-27 05:50:29.743061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.938 [2024-11-27 05:50:29.743154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.938 [2024-11-27 05:50:29.743169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.938 [2024-11-27 05:50:29.743175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.938 [2024-11-27 05:50:29.743181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.938 [2024-11-27 05:50:29.743195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.938 qpair failed and we were unable to recover it.
00:28:41.938 [2024-11-27 05:50:29.753122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.938 [2024-11-27 05:50:29.753171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.938 [2024-11-27 05:50:29.753185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.938 [2024-11-27 05:50:29.753191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.938 [2024-11-27 05:50:29.753196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.938 [2024-11-27 05:50:29.753210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.938 qpair failed and we were unable to recover it.
00:28:41.938 [2024-11-27 05:50:29.763161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.938 [2024-11-27 05:50:29.763214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.938 [2024-11-27 05:50:29.763228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.938 [2024-11-27 05:50:29.763234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.938 [2024-11-27 05:50:29.763240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.938 [2024-11-27 05:50:29.763254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.938 qpair failed and we were unable to recover it.
00:28:41.938 [2024-11-27 05:50:29.773182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.938 [2024-11-27 05:50:29.773255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.938 [2024-11-27 05:50:29.773267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.938 [2024-11-27 05:50:29.773274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.938 [2024-11-27 05:50:29.773280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.938 [2024-11-27 05:50:29.773294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.938 qpair failed and we were unable to recover it.
00:28:41.938 [2024-11-27 05:50:29.783127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.938 [2024-11-27 05:50:29.783188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.938 [2024-11-27 05:50:29.783201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.938 [2024-11-27 05:50:29.783208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.938 [2024-11-27 05:50:29.783213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.938 [2024-11-27 05:50:29.783228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.938 qpair failed and we were unable to recover it.
00:28:41.938 [2024-11-27 05:50:29.793160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.938 [2024-11-27 05:50:29.793230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.938 [2024-11-27 05:50:29.793244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.938 [2024-11-27 05:50:29.793250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.938 [2024-11-27 05:50:29.793256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.938 [2024-11-27 05:50:29.793270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.938 qpair failed and we were unable to recover it.
00:28:41.938 [2024-11-27 05:50:29.803268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.938 [2024-11-27 05:50:29.803325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.938 [2024-11-27 05:50:29.803338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.938 [2024-11-27 05:50:29.803344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.938 [2024-11-27 05:50:29.803350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.803364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.939 [2024-11-27 05:50:29.813305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.939 [2024-11-27 05:50:29.813362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.939 [2024-11-27 05:50:29.813376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.939 [2024-11-27 05:50:29.813383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.939 [2024-11-27 05:50:29.813389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.813403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.939 [2024-11-27 05:50:29.823315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.939 [2024-11-27 05:50:29.823369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.939 [2024-11-27 05:50:29.823386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.939 [2024-11-27 05:50:29.823393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.939 [2024-11-27 05:50:29.823399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.823413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.939 [2024-11-27 05:50:29.833342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.939 [2024-11-27 05:50:29.833397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.939 [2024-11-27 05:50:29.833411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.939 [2024-11-27 05:50:29.833418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.939 [2024-11-27 05:50:29.833424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.833438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.939 [2024-11-27 05:50:29.843316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.939 [2024-11-27 05:50:29.843372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.939 [2024-11-27 05:50:29.843385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.939 [2024-11-27 05:50:29.843392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.939 [2024-11-27 05:50:29.843398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.843412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.939 [2024-11-27 05:50:29.853405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.939 [2024-11-27 05:50:29.853463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.939 [2024-11-27 05:50:29.853476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.939 [2024-11-27 05:50:29.853482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.939 [2024-11-27 05:50:29.853488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.853502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.939 [2024-11-27 05:50:29.863450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.939 [2024-11-27 05:50:29.863518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.939 [2024-11-27 05:50:29.863531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.939 [2024-11-27 05:50:29.863541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.939 [2024-11-27 05:50:29.863547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.863561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.939 [2024-11-27 05:50:29.873479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.939 [2024-11-27 05:50:29.873533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.939 [2024-11-27 05:50:29.873547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.939 [2024-11-27 05:50:29.873554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.939 [2024-11-27 05:50:29.873560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.873574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.939 [2024-11-27 05:50:29.883495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.939 [2024-11-27 05:50:29.883556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.939 [2024-11-27 05:50:29.883570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.939 [2024-11-27 05:50:29.883577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.939 [2024-11-27 05:50:29.883583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.883596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.939 [2024-11-27 05:50:29.893519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.939 [2024-11-27 05:50:29.893571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.939 [2024-11-27 05:50:29.893585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.939 [2024-11-27 05:50:29.893591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.939 [2024-11-27 05:50:29.893597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.893611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.939 [2024-11-27 05:50:29.903577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.939 [2024-11-27 05:50:29.903640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.939 [2024-11-27 05:50:29.903654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.939 [2024-11-27 05:50:29.903662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.939 [2024-11-27 05:50:29.903668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.939 [2024-11-27 05:50:29.903687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.939 qpair failed and we were unable to recover it.
00:28:41.940 [2024-11-27 05:50:29.913566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.940 [2024-11-27 05:50:29.913618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.940 [2024-11-27 05:50:29.913632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.940 [2024-11-27 05:50:29.913639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.940 [2024-11-27 05:50:29.913645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.940 [2024-11-27 05:50:29.913658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.940 qpair failed and we were unable to recover it.
00:28:41.940 [2024-11-27 05:50:29.923530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.940 [2024-11-27 05:50:29.923588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.940 [2024-11-27 05:50:29.923602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.940 [2024-11-27 05:50:29.923609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.940 [2024-11-27 05:50:29.923615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.940 [2024-11-27 05:50:29.923629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.940 qpair failed and we were unable to recover it.
00:28:41.940 [2024-11-27 05:50:29.933569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:41.940 [2024-11-27 05:50:29.933624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:41.940 [2024-11-27 05:50:29.933638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:41.940 [2024-11-27 05:50:29.933645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:41.940 [2024-11-27 05:50:29.933651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:41.940 [2024-11-27 05:50:29.933666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:41.940 qpair failed and we were unable to recover it.
00:28:42.206 [2024-11-27 05:50:29.943649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.206 [2024-11-27 05:50:29.943709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.206 [2024-11-27 05:50:29.943723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.206 [2024-11-27 05:50:29.943730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.206 [2024-11-27 05:50:29.943736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.206 [2024-11-27 05:50:29.943750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.206 qpair failed and we were unable to recover it.
00:28:42.206 [2024-11-27 05:50:29.953664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:29.953723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:29.953743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:29.953750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:29.953756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:29.953772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:29.963660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:29.963721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:29.963735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:29.963742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:29.963748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:29.963763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:29.973762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:29.973818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:29.973832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:29.973838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:29.973844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:29.973858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:29.983765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:29.983820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:29.983834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:29.983840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:29.983846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:29.983861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:29.993780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:29.993832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:29.993845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:29.993855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:29.993861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:29.993876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:30.003834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:30.003912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:30.003926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:30.003932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:30.003938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:30.003954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:30.014424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:30.014742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:30.014813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:30.014822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:30.014829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:30.014850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:30.023920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:30.023985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:30.023998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:30.024005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:30.024011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:30.024025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:30.033864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:30.033917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:30.033931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:30.033937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:30.033943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:30.033957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:30.043997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:30.044097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:30.044114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:30.044122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:30.044129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:30.044146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:30.053942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:30.054025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:30.054040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:30.054047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:30.054053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:30.054068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:30.064014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:30.064070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:30.064084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:30.064091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:30.064097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.206 [2024-11-27 05:50:30.064111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.206 qpair failed and we were unable to recover it. 
00:28:42.206 [2024-11-27 05:50:30.073985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.206 [2024-11-27 05:50:30.074066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.206 [2024-11-27 05:50:30.074079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.206 [2024-11-27 05:50:30.074086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.206 [2024-11-27 05:50:30.074092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.074106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.084078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.084137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.084151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.084157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.084164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.084178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.094050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.094107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.094121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.094127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.094133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.094147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.104168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.104240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.104255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.104262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.104267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.104281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.114163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.114211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.114224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.114231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.114237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.114252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.124191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.124292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.124306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.124316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.124322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.124337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.134213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.134268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.134283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.134289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.134295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.134309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.144177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.144230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.144244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.144250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.144256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.144270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.154207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.154279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.154293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.154299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.154305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.154318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.164247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.164302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.164319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.164326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.164332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.164347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.174355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.174428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.174442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.174448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.174454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.174468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.184374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.184457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.184470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.184477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.184483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.184497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.194353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.194445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.194458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.194465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.194471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.194484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.207 [2024-11-27 05:50:30.204355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.207 [2024-11-27 05:50:30.204410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.207 [2024-11-27 05:50:30.204423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.207 [2024-11-27 05:50:30.204429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.207 [2024-11-27 05:50:30.204435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.207 [2024-11-27 05:50:30.204450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.207 qpair failed and we were unable to recover it. 
00:28:42.467 [2024-11-27 05:50:30.214470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.467 [2024-11-27 05:50:30.214528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.467 [2024-11-27 05:50:30.214542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.467 [2024-11-27 05:50:30.214548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.467 [2024-11-27 05:50:30.214554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.467 [2024-11-27 05:50:30.214568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.224481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.224533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.224547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.224554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.224559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.224573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.234499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.234553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.234567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.234574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.234580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.234594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.244596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.244651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.244664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.244675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.244681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.244695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.254560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.254613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.254626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.254636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.254642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.254656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.264525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.264582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.264595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.264602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.264608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.264621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.274636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.274693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.274706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.274713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.274718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.274732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.284667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.284726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.284740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.284746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.284752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.284766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.294636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.294721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.294735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.294742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.294748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.294762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.304732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.304801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.304815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.304822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.304827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.304842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.314748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.314797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.314811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.314817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.314823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.314837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.324836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.324937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.324950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.324957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.324963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.324977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.334823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.334877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.334891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.334897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.334903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.334918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.468 qpair failed and we were unable to recover it. 
00:28:42.468 [2024-11-27 05:50:30.344832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.468 [2024-11-27 05:50:30.344886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.468 [2024-11-27 05:50:30.344900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.468 [2024-11-27 05:50:30.344906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.468 [2024-11-27 05:50:30.344912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.468 [2024-11-27 05:50:30.344926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.354923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.355024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.355037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.355044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.355050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.355064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.364895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.364951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.364966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.364973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.364980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.364994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.374912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.374967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.374982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.374989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.374995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.375009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.384939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.384989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.385002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.385015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.385021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.385036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.394969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.395019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.395033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.395039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.395045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.395059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.404997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.405070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.405083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.405090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.405095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.405110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.415021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.415073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.415087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.415093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.415099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.415113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.424973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.425022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.425035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.425042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.425048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.425064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.435109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.435158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.435172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.435178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.435184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.435198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.445110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.445165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.445178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.445184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.445191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.445204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.455123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.455207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.455220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.455227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.455232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.455246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.469 [2024-11-27 05:50:30.465211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.469 [2024-11-27 05:50:30.465275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.469 [2024-11-27 05:50:30.465288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.469 [2024-11-27 05:50:30.465295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.469 [2024-11-27 05:50:30.465300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.469 [2024-11-27 05:50:30.465314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.469 qpair failed and we were unable to recover it. 
00:28:42.730 [2024-11-27 05:50:30.475134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.731 [2024-11-27 05:50:30.475219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.731 [2024-11-27 05:50:30.475232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.731 [2024-11-27 05:50:30.475238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.731 [2024-11-27 05:50:30.475244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.731 [2024-11-27 05:50:30.475257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.731 qpair failed and we were unable to recover it. 
00:28:42.731 [2024-11-27 05:50:30.485234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.731 [2024-11-27 05:50:30.485290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.731 [2024-11-27 05:50:30.485303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.731 [2024-11-27 05:50:30.485310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.731 [2024-11-27 05:50:30.485316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.731 [2024-11-27 05:50:30.485329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.731 qpair failed and we were unable to recover it. 
00:28:42.731 [2024-11-27 05:50:30.495242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.731 [2024-11-27 05:50:30.495302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.731 [2024-11-27 05:50:30.495315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.731 [2024-11-27 05:50:30.495321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.731 [2024-11-27 05:50:30.495327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.731 [2024-11-27 05:50:30.495340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.731 qpair failed and we were unable to recover it. 
00:28:42.731 [2024-11-27 05:50:30.505270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.731 [2024-11-27 05:50:30.505321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.731 [2024-11-27 05:50:30.505333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.731 [2024-11-27 05:50:30.505340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.731 [2024-11-27 05:50:30.505346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.731 [2024-11-27 05:50:30.505359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.731 qpair failed and we were unable to recover it. 
00:28:42.731 [2024-11-27 05:50:30.515295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.731 [2024-11-27 05:50:30.515357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.731 [2024-11-27 05:50:30.515370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.731 [2024-11-27 05:50:30.515380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.731 [2024-11-27 05:50:30.515385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.731 [2024-11-27 05:50:30.515399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.731 qpair failed and we were unable to recover it. 
00:28:42.731 [2024-11-27 05:50:30.525354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.731 [2024-11-27 05:50:30.525412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.731 [2024-11-27 05:50:30.525425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.731 [2024-11-27 05:50:30.525432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.731 [2024-11-27 05:50:30.525438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.731 [2024-11-27 05:50:30.525452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.731 qpair failed and we were unable to recover it. 
00:28:42.731 [2024-11-27 05:50:30.535383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.731 [2024-11-27 05:50:30.535442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.731 [2024-11-27 05:50:30.535455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.731 [2024-11-27 05:50:30.535462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.731 [2024-11-27 05:50:30.535468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.731 [2024-11-27 05:50:30.535482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.731 qpair failed and we were unable to recover it.
00:28:42.731 [2024-11-27 05:50:30.545396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.731 [2024-11-27 05:50:30.545464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.731 [2024-11-27 05:50:30.545478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.731 [2024-11-27 05:50:30.545484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.731 [2024-11-27 05:50:30.545490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.731 [2024-11-27 05:50:30.545504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.731 qpair failed and we were unable to recover it.
00:28:42.731 [2024-11-27 05:50:30.555403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.731 [2024-11-27 05:50:30.555456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.731 [2024-11-27 05:50:30.555469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.731 [2024-11-27 05:50:30.555476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.731 [2024-11-27 05:50:30.555481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.731 [2024-11-27 05:50:30.555498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.731 qpair failed and we were unable to recover it.
00:28:42.731 [2024-11-27 05:50:30.565435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.731 [2024-11-27 05:50:30.565491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.731 [2024-11-27 05:50:30.565505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.731 [2024-11-27 05:50:30.565511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.731 [2024-11-27 05:50:30.565517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.731 [2024-11-27 05:50:30.565531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.731 qpair failed and we were unable to recover it.
00:28:42.731 [2024-11-27 05:50:30.575467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.731 [2024-11-27 05:50:30.575520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.731 [2024-11-27 05:50:30.575535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.731 [2024-11-27 05:50:30.575541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.731 [2024-11-27 05:50:30.575547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.731 [2024-11-27 05:50:30.575561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.731 qpair failed and we were unable to recover it.
00:28:42.731 [2024-11-27 05:50:30.585491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.731 [2024-11-27 05:50:30.585550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.731 [2024-11-27 05:50:30.585564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.731 [2024-11-27 05:50:30.585571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.731 [2024-11-27 05:50:30.585577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.731 [2024-11-27 05:50:30.585591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.731 qpair failed and we were unable to recover it.
00:28:42.731 [2024-11-27 05:50:30.595514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.731 [2024-11-27 05:50:30.595573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.731 [2024-11-27 05:50:30.595587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.731 [2024-11-27 05:50:30.595593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.731 [2024-11-27 05:50:30.595599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.731 [2024-11-27 05:50:30.595613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.605559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.605623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.605636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.605643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.605649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.605663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.615588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.615643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.615658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.615665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.615675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.615690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.625614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.625673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.625687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.625694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.625699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.625714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.635679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.635735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.635749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.635755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.635761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.635775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.645708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.645780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.645794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.645804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.645810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.645824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.655709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.655763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.655777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.655783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.655789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.655803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.665726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.665776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.665789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.665796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.665802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.665815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.675801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.675855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.675867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.675874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.675880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.675895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.685798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.685853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.685866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.685873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.685879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.685896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.695861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.695924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.695938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.695944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.695949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.695963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.705851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.705902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.705916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.705922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.705928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.705942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.715883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.715939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.715952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.715958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.715964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.732 [2024-11-27 05:50:30.715977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.732 qpair failed and we were unable to recover it.
00:28:42.732 [2024-11-27 05:50:30.725917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.732 [2024-11-27 05:50:30.725970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.732 [2024-11-27 05:50:30.725984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.732 [2024-11-27 05:50:30.725991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.732 [2024-11-27 05:50:30.725997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.733 [2024-11-27 05:50:30.726011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.733 qpair failed and we were unable to recover it.
00:28:42.993 [2024-11-27 05:50:30.735974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.993 [2024-11-27 05:50:30.736033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.993 [2024-11-27 05:50:30.736046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.993 [2024-11-27 05:50:30.736052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.993 [2024-11-27 05:50:30.736058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.993 [2024-11-27 05:50:30.736072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.993 qpair failed and we were unable to recover it.
00:28:42.993 [2024-11-27 05:50:30.745975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.993 [2024-11-27 05:50:30.746028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.993 [2024-11-27 05:50:30.746041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.993 [2024-11-27 05:50:30.746047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.993 [2024-11-27 05:50:30.746053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.993 [2024-11-27 05:50:30.746067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.993 qpair failed and we were unable to recover it.
00:28:42.993 [2024-11-27 05:50:30.756034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.993 [2024-11-27 05:50:30.756092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.993 [2024-11-27 05:50:30.756105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.756112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.756117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.756132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.766024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.766077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.766090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.766096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.766102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.766116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.776044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.776099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.776113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.776123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.776128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.776142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.786116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.786167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.786183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.786190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.786196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.786211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.796092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.796146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.796159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.796166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.796172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.796186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.806132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.806186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.806199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.806206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.806212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.806226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.816146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.816202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.816216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.816222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.816229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.816246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.826177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.826253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.826266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.826272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.826278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.826292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.836214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.836263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.836276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.836283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.836289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.836302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.846243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.846301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.846315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.846321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.846327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.846341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.856282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.856335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.856349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.856355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.856361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.856375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.866292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.866345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.866360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.866367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.866375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.866390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.994 [2024-11-27 05:50:30.876358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.994 [2024-11-27 05:50:30.876408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.994 [2024-11-27 05:50:30.876422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.994 [2024-11-27 05:50:30.876429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.994 [2024-11-27 05:50:30.876435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.994 [2024-11-27 05:50:30.876449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.994 qpair failed and we were unable to recover it.
00:28:42.995 [2024-11-27 05:50:30.886401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:42.995 [2024-11-27 05:50:30.886503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:42.995 [2024-11-27 05:50:30.886516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:42.995 [2024-11-27 05:50:30.886523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:42.995 [2024-11-27 05:50:30.886529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:42.995 [2024-11-27 05:50:30.886543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:42.995 qpair failed and we were unable to recover it. 
00:28:42.995 [2024-11-27 05:50:30.896389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.995 [2024-11-27 05:50:30.896441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.995 [2024-11-27 05:50:30.896454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.995 [2024-11-27 05:50:30.896460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.995 [2024-11-27 05:50:30.896466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.995 [2024-11-27 05:50:30.896480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.995 qpair failed and we were unable to recover it.
00:28:42.995 [2024-11-27 05:50:30.906413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.995 [2024-11-27 05:50:30.906470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.995 [2024-11-27 05:50:30.906484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.995 [2024-11-27 05:50:30.906494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.995 [2024-11-27 05:50:30.906499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.995 [2024-11-27 05:50:30.906513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.995 qpair failed and we were unable to recover it.
00:28:42.995 [2024-11-27 05:50:30.916482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.995 [2024-11-27 05:50:30.916533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.995 [2024-11-27 05:50:30.916546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.995 [2024-11-27 05:50:30.916553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.995 [2024-11-27 05:50:30.916559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.995 [2024-11-27 05:50:30.916573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.995 qpair failed and we were unable to recover it.
00:28:42.995 [2024-11-27 05:50:30.926470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.995 [2024-11-27 05:50:30.926544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.995 [2024-11-27 05:50:30.926557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.995 [2024-11-27 05:50:30.926563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.995 [2024-11-27 05:50:30.926569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.995 [2024-11-27 05:50:30.926583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.995 qpair failed and we were unable to recover it.
00:28:42.995 [2024-11-27 05:50:30.936497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.995 [2024-11-27 05:50:30.936581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.995 [2024-11-27 05:50:30.936595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.995 [2024-11-27 05:50:30.936601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.995 [2024-11-27 05:50:30.936607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.995 [2024-11-27 05:50:30.936621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.995 qpair failed and we were unable to recover it.
00:28:42.995 [2024-11-27 05:50:30.946537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.995 [2024-11-27 05:50:30.946596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.995 [2024-11-27 05:50:30.946609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.995 [2024-11-27 05:50:30.946615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.995 [2024-11-27 05:50:30.946621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.995 [2024-11-27 05:50:30.946640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.995 qpair failed and we were unable to recover it.
00:28:42.995 [2024-11-27 05:50:30.956545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.995 [2024-11-27 05:50:30.956623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.995 [2024-11-27 05:50:30.956639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.995 [2024-11-27 05:50:30.956646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.995 [2024-11-27 05:50:30.956652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.995 [2024-11-27 05:50:30.956667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.995 qpair failed and we were unable to recover it.
00:28:42.995 [2024-11-27 05:50:30.966612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.995 [2024-11-27 05:50:30.966665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.995 [2024-11-27 05:50:30.966682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.995 [2024-11-27 05:50:30.966689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.995 [2024-11-27 05:50:30.966695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.995 [2024-11-27 05:50:30.966709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.995 qpair failed and we were unable to recover it.
00:28:42.995 [2024-11-27 05:50:30.976614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.995 [2024-11-27 05:50:30.976668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.995 [2024-11-27 05:50:30.976685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.995 [2024-11-27 05:50:30.976692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.995 [2024-11-27 05:50:30.976698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.995 [2024-11-27 05:50:30.976713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.995 qpair failed and we were unable to recover it.
00:28:42.995 [2024-11-27 05:50:30.986663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:42.995 [2024-11-27 05:50:30.986720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:42.995 [2024-11-27 05:50:30.986734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:42.995 [2024-11-27 05:50:30.986740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:42.995 [2024-11-27 05:50:30.986746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:42.995 [2024-11-27 05:50:30.986772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:42.995 qpair failed and we were unable to recover it.
00:28:43.256 [2024-11-27 05:50:30.996657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.256 [2024-11-27 05:50:30.996724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.256 [2024-11-27 05:50:30.996740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.256 [2024-11-27 05:50:30.996747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.256 [2024-11-27 05:50:30.996753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.256 [2024-11-27 05:50:30.996768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.256 qpair failed and we were unable to recover it.
00:28:43.256 [2024-11-27 05:50:31.006722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.256 [2024-11-27 05:50:31.006781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.256 [2024-11-27 05:50:31.006795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.256 [2024-11-27 05:50:31.006801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.256 [2024-11-27 05:50:31.006807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.256 [2024-11-27 05:50:31.006821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.256 qpair failed and we were unable to recover it.
00:28:43.256 [2024-11-27 05:50:31.016730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.256 [2024-11-27 05:50:31.016779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.256 [2024-11-27 05:50:31.016792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.256 [2024-11-27 05:50:31.016799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.256 [2024-11-27 05:50:31.016805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.256 [2024-11-27 05:50:31.016820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.256 qpair failed and we were unable to recover it.
00:28:43.256 [2024-11-27 05:50:31.026746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.256 [2024-11-27 05:50:31.026800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.256 [2024-11-27 05:50:31.026813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.256 [2024-11-27 05:50:31.026821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.256 [2024-11-27 05:50:31.026827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.256 [2024-11-27 05:50:31.026841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.256 qpair failed and we were unable to recover it.
00:28:43.256 [2024-11-27 05:50:31.036786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.256 [2024-11-27 05:50:31.036839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.256 [2024-11-27 05:50:31.036853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.256 [2024-11-27 05:50:31.036863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.256 [2024-11-27 05:50:31.036869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.256 [2024-11-27 05:50:31.036884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.256 qpair failed and we were unable to recover it.
00:28:43.256 [2024-11-27 05:50:31.046859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.256 [2024-11-27 05:50:31.046962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.256 [2024-11-27 05:50:31.046976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.256 [2024-11-27 05:50:31.046982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.256 [2024-11-27 05:50:31.046989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.256 [2024-11-27 05:50:31.047003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.256 qpair failed and we were unable to recover it.
00:28:43.256 [2024-11-27 05:50:31.056845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.256 [2024-11-27 05:50:31.056896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.256 [2024-11-27 05:50:31.056909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.256 [2024-11-27 05:50:31.056916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.256 [2024-11-27 05:50:31.056922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.256 [2024-11-27 05:50:31.056935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.256 qpair failed and we were unable to recover it.
00:28:43.256 [2024-11-27 05:50:31.066866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.066923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.066937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.066943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.066949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.066963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.076910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.076995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.077009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.077015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.077021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.077038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.086933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.086994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.087007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.087014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.087020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.087033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.096966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.097016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.097029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.097036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.097042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.097055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.107014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.107079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.107094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.107100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.107106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.107120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.117009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.117064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.117077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.117084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.117090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.117104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.127051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.127105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.127118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.127125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.127130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.127144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.137017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.137095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.137108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.137115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.137121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.137134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.147086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.147148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.147162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.147168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.147174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.147188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.157117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.157172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.157185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.157191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.157197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.157211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.167189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.167244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.167258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.167268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.167274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.167288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.177192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.177248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.177262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.177268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.177274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.177287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.187226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.257 [2024-11-27 05:50:31.187280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.257 [2024-11-27 05:50:31.187293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.257 [2024-11-27 05:50:31.187300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.257 [2024-11-27 05:50:31.187306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.257 [2024-11-27 05:50:31.187319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.257 qpair failed and we were unable to recover it.
00:28:43.257 [2024-11-27 05:50:31.197202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.258 [2024-11-27 05:50:31.197257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.258 [2024-11-27 05:50:31.197272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.258 [2024-11-27 05:50:31.197279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.258 [2024-11-27 05:50:31.197285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.258 [2024-11-27 05:50:31.197300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.258 qpair failed and we were unable to recover it. 
00:28:43.258 [2024-11-27 05:50:31.207208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.258 [2024-11-27 05:50:31.207305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.258 [2024-11-27 05:50:31.207319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.258 [2024-11-27 05:50:31.207325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.258 [2024-11-27 05:50:31.207330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.258 [2024-11-27 05:50:31.207348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.258 qpair failed and we were unable to recover it. 
00:28:43.258 [2024-11-27 05:50:31.217306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.258 [2024-11-27 05:50:31.217361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.258 [2024-11-27 05:50:31.217375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.258 [2024-11-27 05:50:31.217381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.258 [2024-11-27 05:50:31.217387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.258 [2024-11-27 05:50:31.217401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.258 qpair failed and we were unable to recover it. 
00:28:43.258 [2024-11-27 05:50:31.227266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.258 [2024-11-27 05:50:31.227327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.258 [2024-11-27 05:50:31.227340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.258 [2024-11-27 05:50:31.227346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.258 [2024-11-27 05:50:31.227352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.258 [2024-11-27 05:50:31.227367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.258 qpair failed and we were unable to recover it. 
00:28:43.258 [2024-11-27 05:50:31.237291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.258 [2024-11-27 05:50:31.237389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.258 [2024-11-27 05:50:31.237402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.258 [2024-11-27 05:50:31.237408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.258 [2024-11-27 05:50:31.237414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.258 [2024-11-27 05:50:31.237429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.258 qpair failed and we were unable to recover it. 
00:28:43.258 [2024-11-27 05:50:31.247388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.258 [2024-11-27 05:50:31.247447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.258 [2024-11-27 05:50:31.247461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.258 [2024-11-27 05:50:31.247467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.258 [2024-11-27 05:50:31.247473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.258 [2024-11-27 05:50:31.247487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.258 qpair failed and we were unable to recover it. 
00:28:43.518 [2024-11-27 05:50:31.257425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.518 [2024-11-27 05:50:31.257484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.518 [2024-11-27 05:50:31.257498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.518 [2024-11-27 05:50:31.257505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.518 [2024-11-27 05:50:31.257511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.518 [2024-11-27 05:50:31.257525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.518 qpair failed and we were unable to recover it. 
00:28:43.518 [2024-11-27 05:50:31.267426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.518 [2024-11-27 05:50:31.267483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.518 [2024-11-27 05:50:31.267497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.518 [2024-11-27 05:50:31.267505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.518 [2024-11-27 05:50:31.267511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.518 [2024-11-27 05:50:31.267525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.518 qpair failed and we were unable to recover it. 
00:28:43.518 [2024-11-27 05:50:31.277404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.518 [2024-11-27 05:50:31.277460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.518 [2024-11-27 05:50:31.277473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.518 [2024-11-27 05:50:31.277480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.518 [2024-11-27 05:50:31.277486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.518 [2024-11-27 05:50:31.277500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.518 qpair failed and we were unable to recover it. 
00:28:43.518 [2024-11-27 05:50:31.287524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.518 [2024-11-27 05:50:31.287582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.518 [2024-11-27 05:50:31.287595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.518 [2024-11-27 05:50:31.287602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.518 [2024-11-27 05:50:31.287608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.518 [2024-11-27 05:50:31.287622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.518 qpair failed and we were unable to recover it. 
00:28:43.518 [2024-11-27 05:50:31.297467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.518 [2024-11-27 05:50:31.297523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.518 [2024-11-27 05:50:31.297536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.518 [2024-11-27 05:50:31.297546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.518 [2024-11-27 05:50:31.297552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.518 [2024-11-27 05:50:31.297566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.518 qpair failed and we were unable to recover it. 
00:28:43.518 [2024-11-27 05:50:31.307567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.518 [2024-11-27 05:50:31.307620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.518 [2024-11-27 05:50:31.307633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.518 [2024-11-27 05:50:31.307640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.518 [2024-11-27 05:50:31.307646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.518 [2024-11-27 05:50:31.307660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.518 qpair failed and we were unable to recover it. 
00:28:43.518 [2024-11-27 05:50:31.317532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.518 [2024-11-27 05:50:31.317587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.518 [2024-11-27 05:50:31.317600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.518 [2024-11-27 05:50:31.317606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.518 [2024-11-27 05:50:31.317612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.518 [2024-11-27 05:50:31.317626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.518 qpair failed and we were unable to recover it. 
00:28:43.518 [2024-11-27 05:50:31.327547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.518 [2024-11-27 05:50:31.327599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.518 [2024-11-27 05:50:31.327612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.518 [2024-11-27 05:50:31.327619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.518 [2024-11-27 05:50:31.327624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.518 [2024-11-27 05:50:31.327638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.518 qpair failed and we were unable to recover it. 
00:28:43.518 [2024-11-27 05:50:31.337590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.518 [2024-11-27 05:50:31.337641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.337655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.337662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.337668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.337695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.347685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.347777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.347790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.347797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.347802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.347816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.357721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.357774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.357788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.357794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.357800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.357814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.367734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.367790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.367803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.367810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.367816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.367830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.377813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.377870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.377883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.377890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.377896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.377910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.387776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.387836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.387851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.387857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.387863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.387878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.397766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.397816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.397830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.397837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.397843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.397857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.407941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.407994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.408009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.408015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.408022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.408036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.417867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.417923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.417936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.417943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.417949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.417963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.427918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.427972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.427986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.427995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.428001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.428015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.437871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.437923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.437937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.437943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.437949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.437964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.447996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.448050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.448065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.448071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.448077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.448091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.519 [2024-11-27 05:50:31.457997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.519 [2024-11-27 05:50:31.458052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.519 [2024-11-27 05:50:31.458065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.519 [2024-11-27 05:50:31.458072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.519 [2024-11-27 05:50:31.458078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.519 [2024-11-27 05:50:31.458092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.519 qpair failed and we were unable to recover it. 
00:28:43.520 [2024-11-27 05:50:31.468027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.520 [2024-11-27 05:50:31.468079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.520 [2024-11-27 05:50:31.468092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.520 [2024-11-27 05:50:31.468098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.520 [2024-11-27 05:50:31.468104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.520 [2024-11-27 05:50:31.468121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.520 qpair failed and we were unable to recover it. 
00:28:43.520 [2024-11-27 05:50:31.478059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:43.520 [2024-11-27 05:50:31.478113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:43.520 [2024-11-27 05:50:31.478126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:43.520 [2024-11-27 05:50:31.478132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:43.520 [2024-11-27 05:50:31.478138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:43.520 [2024-11-27 05:50:31.478152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:43.520 qpair failed and we were unable to recover it. 
00:28:43.520 [2024-11-27 05:50:31.488056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.520 [2024-11-27 05:50:31.488114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.520 [2024-11-27 05:50:31.488128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.520 [2024-11-27 05:50:31.488134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.520 [2024-11-27 05:50:31.488140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.520 [2024-11-27 05:50:31.488155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.520 qpair failed and we were unable to recover it.
00:28:43.520 [2024-11-27 05:50:31.498097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.520 [2024-11-27 05:50:31.498152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.520 [2024-11-27 05:50:31.498166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.520 [2024-11-27 05:50:31.498172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.520 [2024-11-27 05:50:31.498178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.520 [2024-11-27 05:50:31.498192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.520 qpair failed and we were unable to recover it.
00:28:43.520 [2024-11-27 05:50:31.508107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.520 [2024-11-27 05:50:31.508163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.520 [2024-11-27 05:50:31.508176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.520 [2024-11-27 05:50:31.508182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.520 [2024-11-27 05:50:31.508188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.520 [2024-11-27 05:50:31.508202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.520 qpair failed and we were unable to recover it.
00:28:43.520 [2024-11-27 05:50:31.518157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.520 [2024-11-27 05:50:31.518212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.520 [2024-11-27 05:50:31.518226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.520 [2024-11-27 05:50:31.518232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.520 [2024-11-27 05:50:31.518238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.520 [2024-11-27 05:50:31.518252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.520 qpair failed and we were unable to recover it.
00:28:43.781 [2024-11-27 05:50:31.528145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.781 [2024-11-27 05:50:31.528201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.781 [2024-11-27 05:50:31.528214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.781 [2024-11-27 05:50:31.528221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.781 [2024-11-27 05:50:31.528227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.781 [2024-11-27 05:50:31.528241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.781 qpair failed and we were unable to recover it.
00:28:43.781 [2024-11-27 05:50:31.538142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.781 [2024-11-27 05:50:31.538198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.781 [2024-11-27 05:50:31.538212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.781 [2024-11-27 05:50:31.538218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.781 [2024-11-27 05:50:31.538224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.781 [2024-11-27 05:50:31.538238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.781 qpair failed and we were unable to recover it.
00:28:43.781 [2024-11-27 05:50:31.548257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.781 [2024-11-27 05:50:31.548311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.548324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.548331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.548337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.548351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.558292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.558346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.558363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.558370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.558376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.558390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.568304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.568361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.568374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.568380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.568386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.568400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.578309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.578398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.578411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.578417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.578423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.578437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.588359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.588443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.588456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.588462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.588468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.588481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.598326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.598380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.598393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.598400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.598406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.598423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.608399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.608482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.608496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.608502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.608508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.608522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.618419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.618487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.618501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.618508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.618514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.618528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.628498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.628560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.628574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.628580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.628586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.628600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.638551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.638615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.638630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.638637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.638643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.638658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.648481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.648565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.648579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.648585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.648591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.648605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.658565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.658615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.658629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.658635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.658641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.658655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.668596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.668647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.782 [2024-11-27 05:50:31.668660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.782 [2024-11-27 05:50:31.668666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.782 [2024-11-27 05:50:31.668676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.782 [2024-11-27 05:50:31.668690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.782 qpair failed and we were unable to recover it.
00:28:43.782 [2024-11-27 05:50:31.678637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.782 [2024-11-27 05:50:31.678690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.678704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.678710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.678716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.678730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:43.783 [2024-11-27 05:50:31.688652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.783 [2024-11-27 05:50:31.688728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.688745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.688751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.688757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.688772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:43.783 [2024-11-27 05:50:31.698720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.783 [2024-11-27 05:50:31.698778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.698791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.698798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.698804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.698818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:43.783 [2024-11-27 05:50:31.708709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.783 [2024-11-27 05:50:31.708781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.708794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.708800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.708806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.708820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:43.783 [2024-11-27 05:50:31.718728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.783 [2024-11-27 05:50:31.718780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.718793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.718800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.718805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.718819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:43.783 [2024-11-27 05:50:31.728761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.783 [2024-11-27 05:50:31.728813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.728826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.728832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.728838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.728855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:43.783 [2024-11-27 05:50:31.738785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.783 [2024-11-27 05:50:31.738838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.738852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.738858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.738864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.738878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:43.783 [2024-11-27 05:50:31.748841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.783 [2024-11-27 05:50:31.748929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.748942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.748948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.748953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.748967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:43.783 [2024-11-27 05:50:31.758843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.783 [2024-11-27 05:50:31.758897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.758909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.758916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.758922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.758935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:43.783 [2024-11-27 05:50:31.768834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.783 [2024-11-27 05:50:31.768907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.768920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.768926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.768932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.768946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:43.783 [2024-11-27 05:50:31.778951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:43.783 [2024-11-27 05:50:31.779056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:43.783 [2024-11-27 05:50:31.779069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:43.783 [2024-11-27 05:50:31.779075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:43.783 [2024-11-27 05:50:31.779081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:43.783 [2024-11-27 05:50:31.779095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:43.783 qpair failed and we were unable to recover it.
00:28:44.044 [2024-11-27 05:50:31.788863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:44.044 [2024-11-27 05:50:31.788918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:44.044 [2024-11-27 05:50:31.788931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:44.044 [2024-11-27 05:50:31.788938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:44.044 [2024-11-27 05:50:31.788944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0
00:28:44.044 [2024-11-27 05:50:31.788958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:44.044 qpair failed and we were unable to recover it.
00:28:44.044 [2024-11-27 05:50:31.798959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.044 [2024-11-27 05:50:31.799009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.044 [2024-11-27 05:50:31.799022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.044 [2024-11-27 05:50:31.799028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.044 [2024-11-27 05:50:31.799034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.044 [2024-11-27 05:50:31.799048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.044 qpair failed and we were unable to recover it. 
00:28:44.044 [2024-11-27 05:50:31.809054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.044 [2024-11-27 05:50:31.809153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.044 [2024-11-27 05:50:31.809166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.044 [2024-11-27 05:50:31.809172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.044 [2024-11-27 05:50:31.809178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.044 [2024-11-27 05:50:31.809192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.044 qpair failed and we were unable to recover it. 
00:28:44.044 [2024-11-27 05:50:31.819032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.044 [2024-11-27 05:50:31.819085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.819101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.819108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.819113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.819127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.829051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.829103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.829118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.829124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.829130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.829144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.839069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.839123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.839138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.839145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.839150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.839164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.849113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.849168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.849182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.849188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.849194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.849208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.859135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.859191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.859204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.859210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.859216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.859233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.869158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.869211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.869225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.869232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.869237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.869252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.879146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.879198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.879212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.879218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.879224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.879238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.889210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.889266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.889279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.889286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.889293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.889307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.899296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.899358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.899372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.899379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.899385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.899399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.909252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.909304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.909318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.909324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.909330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.909344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.919302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.919350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.919363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.919370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.919376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.919390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.929339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.929401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.929414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.929420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.929426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.929440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.939371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.939421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.045 [2024-11-27 05:50:31.939435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.045 [2024-11-27 05:50:31.939441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.045 [2024-11-27 05:50:31.939447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.045 [2024-11-27 05:50:31.939461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.045 qpair failed and we were unable to recover it. 
00:28:44.045 [2024-11-27 05:50:31.949317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.045 [2024-11-27 05:50:31.949401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.046 [2024-11-27 05:50:31.949417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.046 [2024-11-27 05:50:31.949423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.046 [2024-11-27 05:50:31.949429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.046 [2024-11-27 05:50:31.949443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.046 qpair failed and we were unable to recover it. 
00:28:44.046 [2024-11-27 05:50:31.959451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.046 [2024-11-27 05:50:31.959513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.046 [2024-11-27 05:50:31.959529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.046 [2024-11-27 05:50:31.959536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.046 [2024-11-27 05:50:31.959542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.046 [2024-11-27 05:50:31.959557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.046 qpair failed and we were unable to recover it. 
00:28:44.046 [2024-11-27 05:50:31.969448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.046 [2024-11-27 05:50:31.969514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.046 [2024-11-27 05:50:31.969528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.046 [2024-11-27 05:50:31.969534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.046 [2024-11-27 05:50:31.969540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.046 [2024-11-27 05:50:31.969555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.046 qpair failed and we were unable to recover it. 
00:28:44.046 [2024-11-27 05:50:31.979523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.046 [2024-11-27 05:50:31.979576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.046 [2024-11-27 05:50:31.979590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.046 [2024-11-27 05:50:31.979596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.046 [2024-11-27 05:50:31.979602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.046 [2024-11-27 05:50:31.979617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.046 qpair failed and we were unable to recover it. 
00:28:44.046 [2024-11-27 05:50:31.989517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.046 [2024-11-27 05:50:31.989571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.046 [2024-11-27 05:50:31.989585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.046 [2024-11-27 05:50:31.989591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.046 [2024-11-27 05:50:31.989597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.046 [2024-11-27 05:50:31.989614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.046 qpair failed and we were unable to recover it. 
00:28:44.046 [2024-11-27 05:50:31.999518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.046 [2024-11-27 05:50:31.999573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.046 [2024-11-27 05:50:31.999587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.046 [2024-11-27 05:50:31.999593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.046 [2024-11-27 05:50:31.999599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.046 [2024-11-27 05:50:31.999613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.046 qpair failed and we were unable to recover it. 
00:28:44.046 [2024-11-27 05:50:32.009569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.046 [2024-11-27 05:50:32.009621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.046 [2024-11-27 05:50:32.009635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.046 [2024-11-27 05:50:32.009641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.046 [2024-11-27 05:50:32.009647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.046 [2024-11-27 05:50:32.009661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.046 qpair failed and we were unable to recover it. 
00:28:44.046 [2024-11-27 05:50:32.019602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.046 [2024-11-27 05:50:32.019657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.046 [2024-11-27 05:50:32.019695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.046 [2024-11-27 05:50:32.019703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.046 [2024-11-27 05:50:32.019708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.046 [2024-11-27 05:50:32.019723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.046 qpair failed and we were unable to recover it. 
00:28:44.046 [2024-11-27 05:50:32.029620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.046 [2024-11-27 05:50:32.029676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.046 [2024-11-27 05:50:32.029693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.046 [2024-11-27 05:50:32.029699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.046 [2024-11-27 05:50:32.029705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.046 [2024-11-27 05:50:32.029720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.046 qpair failed and we were unable to recover it. 
00:28:44.046 [2024-11-27 05:50:32.039684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.046 [2024-11-27 05:50:32.039787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.046 [2024-11-27 05:50:32.039800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.046 [2024-11-27 05:50:32.039807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.046 [2024-11-27 05:50:32.039813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.046 [2024-11-27 05:50:32.039827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.046 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.049689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.049753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.049767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.049773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.049779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.049793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.059694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.059750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.059763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.059770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.059775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.059790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.069704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.069754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.069768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.069774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.069780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.069794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.079781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.079834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.079851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.079857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.079863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.079877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.089787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.089843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.089857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.089863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.089869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.089884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.099737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.099792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.099805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.099812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.099818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.099832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.109837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.109889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.109904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.109912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.109918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.109932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.119860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.119937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.119950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.119956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.119965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.119979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.129906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.129961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.129975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.129981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.129987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.130000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.139924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.139981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.139994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.140000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.140006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.140021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.149961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.150014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.150028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.150035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.150040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.150055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.159922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.159975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.159989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.307 [2024-11-27 05:50:32.159995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.307 [2024-11-27 05:50:32.160001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.307 [2024-11-27 05:50:32.160015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.307 qpair failed and we were unable to recover it. 
00:28:44.307 [2024-11-27 05:50:32.170011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.307 [2024-11-27 05:50:32.170074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.307 [2024-11-27 05:50:32.170089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.308 [2024-11-27 05:50:32.170096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.308 [2024-11-27 05:50:32.170101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.308 [2024-11-27 05:50:32.170116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.308 qpair failed and we were unable to recover it. 
00:28:44.308 [2024-11-27 05:50:32.180040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.308 [2024-11-27 05:50:32.180089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.308 [2024-11-27 05:50:32.180103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.308 [2024-11-27 05:50:32.180109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.308 [2024-11-27 05:50:32.180115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.308 [2024-11-27 05:50:32.180129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.308 qpair failed and we were unable to recover it. 
00:28:44.308 [2024-11-27 05:50:32.190070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.308 [2024-11-27 05:50:32.190123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.308 [2024-11-27 05:50:32.190137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.308 [2024-11-27 05:50:32.190143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.308 [2024-11-27 05:50:32.190149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.308 [2024-11-27 05:50:32.190162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.308 qpair failed and we were unable to recover it. 
00:28:44.308 [2024-11-27 05:50:32.200089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.308 [2024-11-27 05:50:32.200146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.308 [2024-11-27 05:50:32.200159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.308 [2024-11-27 05:50:32.200166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.308 [2024-11-27 05:50:32.200172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.308 [2024-11-27 05:50:32.200186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.308 qpair failed and we were unable to recover it. 
00:28:44.308 [2024-11-27 05:50:32.210125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.308 [2024-11-27 05:50:32.210180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.308 [2024-11-27 05:50:32.210196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.308 [2024-11-27 05:50:32.210202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.308 [2024-11-27 05:50:32.210208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.308 [2024-11-27 05:50:32.210222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.308 qpair failed and we were unable to recover it. 
00:28:44.308 [2024-11-27 05:50:32.220149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.308 [2024-11-27 05:50:32.220204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.308 [2024-11-27 05:50:32.220217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.308 [2024-11-27 05:50:32.220223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.308 [2024-11-27 05:50:32.220229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.308 [2024-11-27 05:50:32.220243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.308 qpair failed and we were unable to recover it. 
00:28:44.308 [2024-11-27 05:50:32.230177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.308 [2024-11-27 05:50:32.230228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.308 [2024-11-27 05:50:32.230241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.308 [2024-11-27 05:50:32.230247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.308 [2024-11-27 05:50:32.230253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.308 [2024-11-27 05:50:32.230267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.308 qpair failed and we were unable to recover it. 
00:28:44.308 [2024-11-27 05:50:32.240201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.308 [2024-11-27 05:50:32.240260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.308 [2024-11-27 05:50:32.240276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.308 [2024-11-27 05:50:32.240282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.308 [2024-11-27 05:50:32.240288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.308 [2024-11-27 05:50:32.240303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.308 qpair failed and we were unable to recover it. 
00:28:44.308 [2024-11-27 05:50:32.250230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.308 [2024-11-27 05:50:32.250287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.308 [2024-11-27 05:50:32.250300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.308 [2024-11-27 05:50:32.250306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.308 [2024-11-27 05:50:32.250316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.308 [2024-11-27 05:50:32.250330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.308 qpair failed and we were unable to recover it. 
00:28:44.308 [2024-11-27 05:50:32.260251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:44.308 [2024-11-27 05:50:32.260305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:44.308 [2024-11-27 05:50:32.260318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:44.308 [2024-11-27 05:50:32.260325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:44.308 [2024-11-27 05:50:32.260331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c26be0 00:28:44.308 [2024-11-27 05:50:32.260345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:44.308 qpair failed and we were unable to recover it. 00:28:44.308 [2024-11-27 05:50:32.260486] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:44.308 A controller has encountered a failure and is being reset. 00:28:44.308 Controller properly reset. 00:28:44.308 Initializing NVMe Controllers 00:28:44.308 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:44.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:44.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:44.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:44.308 Initialization complete. Launching workers. 
00:28:44.308 Starting thread on core 1 00:28:44.308 Starting thread on core 2 00:28:44.308 Starting thread on core 3 00:28:44.308 Starting thread on core 0 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:44.568 00:28:44.568 real 0m11.252s 00:28:44.568 user 0m21.943s 00:28:44.568 sys 0m4.838s 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:44.568 ************************************ 00:28:44.568 END TEST nvmf_target_disconnect_tc2 00:28:44.568 ************************************ 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:44.568 rmmod nvme_tcp 00:28:44.568 rmmod nvme_fabrics 00:28:44.568 rmmod nvme_keyring 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1923891 ']' 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1923891 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1923891 ']' 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1923891 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1923891 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1923891' 00:28:44.568 killing process with pid 1923891 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1923891 00:28:44.568 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1923891 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.828 05:50:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.365 05:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:47.366 00:28:47.366 real 0m20.042s 00:28:47.366 user 0m48.999s 00:28:47.366 sys 0m9.825s 00:28:47.366 05:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.366 05:50:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:47.366 ************************************ 00:28:47.366 END TEST nvmf_target_disconnect 00:28:47.366 ************************************ 00:28:47.366 05:50:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:47.366 00:28:47.366 real 5m53.211s 00:28:47.366 user 10m37.026s 00:28:47.366 sys 1m58.205s 00:28:47.366 05:50:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.366 05:50:34 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.366 ************************************ 00:28:47.366 END TEST nvmf_host 00:28:47.366 ************************************ 00:28:47.366 05:50:34 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:47.366 05:50:34 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:47.366 05:50:34 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:47.366 05:50:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:47.366 05:50:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.366 05:50:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:47.366 ************************************ 00:28:47.366 START TEST nvmf_target_core_interrupt_mode 00:28:47.366 ************************************ 00:28:47.366 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:47.366 * Looking for test storage... 
00:28:47.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:47.366 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:47.366 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:28:47.366 05:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:47.366 05:50:35 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:47.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.366 --rc 
genhtml_branch_coverage=1 00:28:47.366 --rc genhtml_function_coverage=1 00:28:47.366 --rc genhtml_legend=1 00:28:47.366 --rc geninfo_all_blocks=1 00:28:47.366 --rc geninfo_unexecuted_blocks=1 00:28:47.366 00:28:47.366 ' 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:47.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.366 --rc genhtml_branch_coverage=1 00:28:47.366 --rc genhtml_function_coverage=1 00:28:47.366 --rc genhtml_legend=1 00:28:47.366 --rc geninfo_all_blocks=1 00:28:47.366 --rc geninfo_unexecuted_blocks=1 00:28:47.366 00:28:47.366 ' 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:47.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.366 --rc genhtml_branch_coverage=1 00:28:47.366 --rc genhtml_function_coverage=1 00:28:47.366 --rc genhtml_legend=1 00:28:47.366 --rc geninfo_all_blocks=1 00:28:47.366 --rc geninfo_unexecuted_blocks=1 00:28:47.366 00:28:47.366 ' 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:47.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.366 --rc genhtml_branch_coverage=1 00:28:47.366 --rc genhtml_function_coverage=1 00:28:47.366 --rc genhtml_legend=1 00:28:47.366 --rc geninfo_all_blocks=1 00:28:47.366 --rc geninfo_unexecuted_blocks=1 00:28:47.366 00:28:47.366 ' 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.366 
05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.366 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.367 05:50:35 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:47.367 
05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:47.367 ************************************ 00:28:47.367 START TEST nvmf_abort 00:28:47.367 ************************************ 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:47.367 * Looking for test storage... 
00:28:47.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:47.367 05:50:35 
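The trace above walks through `scripts/common.sh`'s `cmp_versions` logic: each version string is split on `.`, `-`, or `:` into an array, and components are compared pairwise (here `lt 1.15 2` succeeds because `1 < 2` at the first component). A minimal standalone sketch of that comparison, with an illustrative function name rather than the exact SPDK implementation:

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced above.
# cmp_lt is a hypothetical name; SPDK's real helpers are lt/cmp_versions.
cmp_lt() {
    local IFS=.-:                     # split on the same separators as the trace
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components default to 0
        (( a > b )) && return 1       # first differing component decides
        (( a < b )) && return 0
    done
    return 1                          # equal versions: not strictly less-than
}

cmp_lt 1.15 2 && echo "1.15 < 2"
cmp_lt 2.1 2.1 || echo "2.1 == 2.1"
```

This mirrors why the lcov check in the log takes the `return 0` branch: the installed lcov (1.x) is below 2.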
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:47.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.367 --rc genhtml_branch_coverage=1 00:28:47.367 --rc genhtml_function_coverage=1 00:28:47.367 --rc genhtml_legend=1 00:28:47.367 --rc geninfo_all_blocks=1 00:28:47.367 --rc geninfo_unexecuted_blocks=1 00:28:47.367 00:28:47.367 ' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:47.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.367 --rc genhtml_branch_coverage=1 00:28:47.367 --rc genhtml_function_coverage=1 00:28:47.367 --rc genhtml_legend=1 00:28:47.367 --rc geninfo_all_blocks=1 00:28:47.367 --rc geninfo_unexecuted_blocks=1 00:28:47.367 00:28:47.367 ' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:47.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.367 --rc genhtml_branch_coverage=1 00:28:47.367 --rc genhtml_function_coverage=1 00:28:47.367 --rc genhtml_legend=1 00:28:47.367 --rc geninfo_all_blocks=1 00:28:47.367 --rc geninfo_unexecuted_blocks=1 00:28:47.367 00:28:47.367 ' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:47.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.367 --rc genhtml_branch_coverage=1 00:28:47.367 --rc genhtml_function_coverage=1 00:28:47.367 --rc genhtml_legend=1 00:28:47.367 --rc geninfo_all_blocks=1 00:28:47.367 --rc geninfo_unexecuted_blocks=1 00:28:47.367 00:28:47.367 ' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.367 05:50:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.367 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:47.368 05:50:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:47.368 05:50:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.940 05:50:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:53.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:53.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.940 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.940 
05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:53.941 Found net devices under 0000:86:00.0: cvl_0_0 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:53.941 Found net devices under 0000:86:00.1: cvl_0_1 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.941 05:50:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.941 05:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:28:53.941 00:28:53.941 --- 10.0.0.2 ping statistics --- 00:28:53.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.941 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:28:53.941 00:28:53.941 --- 10.0.0.1 ping statistics --- 00:28:53.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.941 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1928429 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1928429 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1928429 ']' 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.941 [2024-11-27 05:50:41.249589] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:53.941 [2024-11-27 05:50:41.250583] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:28:53.941 [2024-11-27 05:50:41.250628] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.941 [2024-11-27 05:50:41.330377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:53.941 [2024-11-27 05:50:41.372149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.941 [2024-11-27 05:50:41.372185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.941 [2024-11-27 05:50:41.372192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.941 [2024-11-27 05:50:41.372198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.941 [2024-11-27 05:50:41.372204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.941 [2024-11-27 05:50:41.373535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.941 [2024-11-27 05:50:41.373642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.941 [2024-11-27 05:50:41.373643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.941 [2024-11-27 05:50:41.443317] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:53.941 [2024-11-27 05:50:41.444155] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:53.941 [2024-11-27 05:50:41.444409] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:53.941 [2024-11-27 05:50:41.444509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:53.941 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.942 [2024-11-27 05:50:41.510416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:53.942 Malloc0 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.942 Delay0 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.942 [2024-11-27 05:50:41.602441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.942 05:50:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:53.942 [2024-11-27 05:50:41.732003] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:55.847 Initializing NVMe Controllers 00:28:55.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:55.847 controller IO queue size 128 less than required 00:28:55.847 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:55.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:55.847 Initialization complete. Launching workers. 
00:28:55.847 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37848 00:28:55.847 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37905, failed to submit 66 00:28:55.847 success 37848, unsuccessful 57, failed 0 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:55.847 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.847 rmmod nvme_tcp 00:28:55.847 rmmod nvme_fabrics 00:28:56.105 rmmod nvme_keyring 00:28:56.105 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.105 05:50:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:56.105 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:56.105 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1928429 ']' 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1928429 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1928429 ']' 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1928429 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1928429 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1928429' 00:28:56.106 killing process with pid 1928429 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1928429 00:28:56.106 05:50:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1928429 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:56.365 05:50:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.365 05:50:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.272 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.272 00:28:58.272 real 0m11.079s 00:28:58.272 user 0m10.388s 00:28:58.272 sys 0m5.582s 00:28:58.272 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.272 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:58.272 ************************************ 00:28:58.272 END TEST nvmf_abort 00:28:58.272 ************************************ 00:28:58.272 05:50:46 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:58.272 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:58.272 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.272 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:58.272 ************************************ 00:28:58.272 START TEST nvmf_ns_hotplug_stress 00:28:58.272 ************************************ 00:28:58.272 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:58.532 * Looking for test storage... 
00:28:58.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:58.532 05:50:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:58.532 05:50:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:58.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.532 --rc genhtml_branch_coverage=1 00:28:58.532 --rc genhtml_function_coverage=1 00:28:58.532 --rc genhtml_legend=1 00:28:58.532 --rc geninfo_all_blocks=1 00:28:58.532 --rc geninfo_unexecuted_blocks=1 00:28:58.532 00:28:58.532 ' 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:58.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.532 --rc genhtml_branch_coverage=1 00:28:58.532 --rc genhtml_function_coverage=1 00:28:58.532 --rc genhtml_legend=1 00:28:58.532 --rc geninfo_all_blocks=1 00:28:58.532 --rc geninfo_unexecuted_blocks=1 00:28:58.532 00:28:58.532 ' 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:58.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.532 --rc genhtml_branch_coverage=1 00:28:58.532 --rc genhtml_function_coverage=1 00:28:58.532 --rc genhtml_legend=1 00:28:58.532 --rc geninfo_all_blocks=1 00:28:58.532 --rc geninfo_unexecuted_blocks=1 00:28:58.532 00:28:58.532 ' 00:28:58.532 05:50:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:58.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:58.532 --rc genhtml_branch_coverage=1 00:28:58.532 --rc genhtml_function_coverage=1 00:28:58.532 --rc genhtml_legend=1 00:28:58.532 --rc geninfo_all_blocks=1 00:28:58.532 --rc geninfo_unexecuted_blocks=1 00:28:58.532 00:28:58.532 ' 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.532 05:50:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.532 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.533 
05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.533 05:50:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:05.294 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.294 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.295 
05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.295 05:50:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:05.295 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.295 05:50:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:05.295 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.295 
05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:05.295 Found net devices under 0000:86:00.0: cvl_0_0 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:05.295 Found net devices under 0000:86:00.1: cvl_0_1 00:29:05.295 
05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.295 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:05.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:05.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms
00:29:05.296
00:29:05.296 --- 10.0.0.2 ping statistics ---
00:29:05.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:05.296 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:05.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:05.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms
00:29:05.296
00:29:05.296 --- 10.0.0.1 ping statistics ---
00:29:05.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:05.296 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:05.296 05:50:52
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1932430 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1932430 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1932430 ']' 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:05.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:05.296 05:50:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:29:05.296 [2024-11-27 05:50:52.439909] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:29:05.296 [2024-11-27 05:50:52.440827] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:29:05.296 [2024-11-27 05:50:52.440860] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:05.296 [2024-11-27 05:50:52.520206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:05.296 [2024-11-27 05:50:52.562457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:05.296 [2024-11-27 05:50:52.562494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:05.296 [2024-11-27 05:50:52.562501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:05.296 [2024-11-27 05:50:52.562506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:05.296 [2024-11-27 05:50:52.562511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:05.296 [2024-11-27 05:50:52.563916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.296 [2024-11-27 05:50:52.564022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.296 [2024-11-27 05:50:52.564023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.296 [2024-11-27 05:50:52.632735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:05.296 [2024-11-27 05:50:52.633540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:05.296 [2024-11-27 05:50:52.633866] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:05.296 [2024-11-27 05:50:52.633934] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:05.296 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.296 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:05.296 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:05.296 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:05.296 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:05.555 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.555 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:05.555 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:05.555 [2024-11-27 05:50:53.480763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.555 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:05.814 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.074 [2024-11-27 05:50:53.853211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.074 05:50:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:06.074 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:06.333 Malloc0 00:29:06.333 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:06.594 Delay0 00:29:06.594 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.852 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:07.111 NULL1 00:29:07.111 05:50:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:07.111 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1932916 00:29:07.111 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:07.111 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:07.111 05:50:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.490 Read completed with error (sct=0, sc=11) 00:29:08.490 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:29:08.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.490 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.750 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:08.750 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:08.750 true 00:29:08.750 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:08.750 05:50:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.688 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.946 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:09.946 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:09.946 true 00:29:09.946 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:09.946 05:50:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:10.205 05:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.463 05:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:10.463 05:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:10.722 true 00:29:10.722 05:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:10.722 05:50:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.669 05:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.929 05:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:11.929 05:50:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:12.187 true 00:29:12.187 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:12.188 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.447 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.447 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:12.447 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:12.706 true 00:29:12.706 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:12.706 05:51:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.086 05:51:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.086 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.086 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.086 05:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:14.086 05:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:14.345 true 00:29:14.345 05:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:14.345 05:51:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.282 05:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.282 05:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:15.282 05:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:15.540 true 00:29:15.540 05:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:15.540 05:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.799 05:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.059 05:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:16.059 05:51:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:16.059 true 00:29:16.059 05:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:16.059 05:51:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.437 05:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.437 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:17.437 05:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:17.437 05:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:17.437 true 00:29:17.437 05:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:17.437 05:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.696 05:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.956 05:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:17.956 05:51:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:18.215 true 00:29:18.215 05:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:18.215 05:51:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.593 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.593 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1011 00:29:19.593 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:19.593 true 00:29:19.852 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:19.852 05:51:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.420 05:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.680 05:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:20.680 05:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:20.938 true 00:29:20.938 05:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:20.938 05:51:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.197 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.455 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:21.455 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:21.455 true 00:29:21.455 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:21.455 05:51:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:22.835 05:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:22.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:22.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:22.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:22.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:22.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:22.835 05:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:22.835 05:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:22.835 true 00:29:23.093 05:51:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:23.093 05:51:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.661 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.920 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.920 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.920 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:23.920 05:51:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:24.179 true 00:29:24.179 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:24.179 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.438 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.697 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:24.697 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1016 00:29:24.697 true 00:29:24.697 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:24.697 05:51:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.072 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.072 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.072 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:26.072 05:51:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:26.330 true 00:29:26.330 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:26.330 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.262 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:29:27.262 05:51:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.262 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:27.262 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:27.521 true 00:29:27.521 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:27.521 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.779 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.779 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:27.779 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:28.037 true 00:29:28.037 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:28.037 05:51:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:29.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.414 05:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.414 05:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:29.414 05:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:29.673 true 00:29:29.673 05:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:29.673 05:51:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.610 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:30.610 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.610 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:29:30.610 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:30.610 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:30.869 true 00:29:30.869 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:30.869 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.127 05:51:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.127 05:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:31.127 05:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:31.386 true 00:29:31.386 05:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:31.386 05:51:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.765 05:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:32.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.765 05:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:32.765 05:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:33.023 true 00:29:33.023 05:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:33.023 05:51:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.959 05:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.959 05:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:33.959 05:51:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1024 00:29:34.218 true 00:29:34.218 05:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:34.218 05:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.477 05:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:34.477 05:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:34.477 05:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:34.736 true 00:29:34.736 05:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:34.736 05:51:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:36.113 05:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:36.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:36.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:36.113 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:29:36.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:36.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:36.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:36.113 05:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:36.113 05:51:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:36.372 true 00:29:36.372 05:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:36.372 05:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.310 05:51:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.310 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:37.310 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:37.570 true 00:29:37.570 Initializing NVMe Controllers 00:29:37.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.570 Controller IO queue size 128, less than required. 00:29:37.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:37.570 Controller IO queue size 128, less than required. 00:29:37.570 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:37.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:37.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:37.570 Initialization complete. Launching workers. 00:29:37.570 ======================================================== 00:29:37.570 Latency(us) 00:29:37.570 Device Information : IOPS MiB/s Average min max 00:29:37.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2123.39 1.04 41799.74 2789.71 1034476.28 00:29:37.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18261.83 8.92 7008.69 1570.15 370970.67 00:29:37.570 ======================================================== 00:29:37.570 Total : 20385.23 9.95 10632.64 1570.15 1034476.28 00:29:37.570 00:29:37.570 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:37.570 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.829 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.829 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:37.829 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:38.088 
true 00:29:38.088 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1932916 00:29:38.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1932916) - No such process 00:29:38.088 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1932916 00:29:38.088 05:51:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.347 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:38.606 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:38.606 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:38.606 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:38.606 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:38.606 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:38.606 null0 00:29:38.606 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:38.606 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:38.606 
05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:38.866 null1 00:29:38.866 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:38.866 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:38.866 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:39.126 null2 00:29:39.126 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.126 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.126 05:51:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:39.126 null3 00:29:39.126 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.126 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.126 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:39.386 null4 00:29:39.386 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.386 05:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.386 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:39.643 null5 00:29:39.643 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.643 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.643 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:39.643 null6 00:29:39.643 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.643 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.643 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:39.902 null7 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:39.902 05:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:39.902 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:39.903 05:51:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:39.903 05:51:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1938757 1938758 1938760 1938763 1938764 1938766 1938768 1938771 00:29:40.164 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:40.164 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:40.164 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.164 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:40.164 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:40.164 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:40.164 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:40.164 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.424 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.684 05:51:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:40.684 05:51:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:40.684 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:40.944 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:40.944 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:40.944 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:40.944 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:40.944 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:40.944 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:40.944 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:40.944 05:51:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.204 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.204 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.204 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.205 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:41.467 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:41.467 05:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.467 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:41.467 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:41.467 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:41.467 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:41.467 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:41.467 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:41.467 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.467 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.467 05:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.727 05:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.727 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:41.728 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.728 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.728 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:41.728 05:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:41.728 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.728 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:41.728 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:41.728 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:41.728 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:41.728 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:41.728 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:41.988 05:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:41.988 05:51:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:41.988 05:51:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:42.248 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:42.248 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:42.248 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:42.248 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:42.248 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:42.248 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:42.248 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.248 05:51:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.508 05:51:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:42.508 05:51:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:42.508 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:42.768 05:51:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
1 nqn.2016-06.io.spdk:cnode1 null0 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:42.768 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:43.027 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:43.027 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:43.027 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:43.027 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:43.027 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:43.027 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.027 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:43.027 05:51:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.287 05:51:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.287 05:51:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.287 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.546 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 
null0 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:43.805 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:44.064 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.064 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.064 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.064 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.064 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.065 05:51:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 
00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:44.065 05:51:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:44.065 rmmod nvme_tcp 00:29:44.065 rmmod nvme_fabrics 00:29:44.065 rmmod nvme_keyring 00:29:44.065 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:44.065 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:44.065 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:44.065 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1932430 ']' 00:29:44.065 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1932430 00:29:44.065 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1932430 ']' 00:29:44.065 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1932430 00:29:44.065 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:44.065 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.065 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1932430 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1932430' 00:29:44.324 killing process with pid 1932430 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1932430 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1932430 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:29:44.324 05:51:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:46.864 00:29:46.864 real 0m48.089s 00:29:46.864 user 2m57.074s 00:29:46.864 sys 0m19.899s 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:46.864 ************************************ 00:29:46.864 END TEST nvmf_ns_hotplug_stress 00:29:46.864 ************************************ 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:46.864 ************************************ 00:29:46.864 START TEST nvmf_delete_subsystem 00:29:46.864 ************************************ 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:46.864 * Looking for test storage... 
00:29:46.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.864 05:51:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.864 05:51:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:46.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.864 --rc genhtml_branch_coverage=1 00:29:46.864 --rc genhtml_function_coverage=1 00:29:46.864 --rc genhtml_legend=1 00:29:46.864 --rc geninfo_all_blocks=1 00:29:46.864 --rc geninfo_unexecuted_blocks=1 00:29:46.864 00:29:46.864 ' 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:46.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.864 --rc genhtml_branch_coverage=1 00:29:46.864 --rc genhtml_function_coverage=1 00:29:46.864 --rc genhtml_legend=1 00:29:46.864 --rc geninfo_all_blocks=1 00:29:46.864 --rc geninfo_unexecuted_blocks=1 00:29:46.864 00:29:46.864 ' 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:46.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.864 --rc genhtml_branch_coverage=1 00:29:46.864 --rc genhtml_function_coverage=1 00:29:46.864 --rc genhtml_legend=1 00:29:46.864 --rc geninfo_all_blocks=1 00:29:46.864 --rc geninfo_unexecuted_blocks=1 00:29:46.864 00:29:46.864 ' 00:29:46.864 05:51:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:46.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.864 --rc genhtml_branch_coverage=1 00:29:46.864 --rc genhtml_function_coverage=1 00:29:46.864 --rc genhtml_legend=1 00:29:46.864 --rc geninfo_all_blocks=1 00:29:46.864 --rc geninfo_unexecuted_blocks=1 00:29:46.864 00:29:46.864 ' 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:46.864 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.865 05:51:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.865 
05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:46.865 05:51:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:46.865 05:51:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:53.439 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:29:53.439 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.439 05:51:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.439 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:53.439 Found net devices under 0000:86:00.0: cvl_0_0 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:53.440 Found net devices under 0000:86:00.1: cvl_0_1 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:53.440 05:51:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:29:53.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:29:53.440 00:29:53.440 --- 10.0.0.2 ping statistics --- 00:29:53.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.440 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:29:53.440 00:29:53.440 --- 10.0.0.1 ping statistics --- 00:29:53.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.440 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
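The trace above shows `nvmf_tcp_init` from nvmf/common.sh splitting the dual-port E810 NIC: one port (`cvl_0_0`, 10.0.0.2) is moved into the `cvl_0_0_ns_spdk` namespace for the target, the other (`cvl_0_1`, 10.0.0.1) stays in the root namespace for the initiator, an iptables ACCEPT rule opens TCP/4420, and a ping in each direction verifies the link. A minimal dry-run sketch of that sequence (interface names and IPs taken from the log; `DRY_RUN=1` echoes the commands instead of executing them, since the real ones need root and the physical `cvl_*` interfaces):

```shell
#!/usr/bin/env bash
# Sketch of the netns split performed by nvmf_tcp_init in the log above.
# DRY_RUN=1 prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$*"      # dry run: show what would be executed
    else
        "$@"           # real run: needs root and the E810 ports
    fi
}

setup_tcp_netns() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"                        # target port into the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"                    # initiator IP, root namespace
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target IP, inside namespace
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
}

setup_tcp_netns
```

The namespace boundary is what forces NVMe/TCP traffic between target and initiator onto the wire between the two physical ports instead of the kernel loopback.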
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1943014 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1943014 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1943014 ']' 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.440 [2024-11-27 05:51:40.548707] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:53.440 [2024-11-27 05:51:40.549666] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:29:53.440 [2024-11-27 05:51:40.549710] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.440 [2024-11-27 05:51:40.630280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:53.440 [2024-11-27 05:51:40.672063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.440 [2024-11-27 05:51:40.672101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.440 [2024-11-27 05:51:40.672108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.440 [2024-11-27 05:51:40.672115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.440 [2024-11-27 05:51:40.672120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:53.440 [2024-11-27 05:51:40.673338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.440 [2024-11-27 05:51:40.673339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.440 [2024-11-27 05:51:40.741823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
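The `nvmfappstart -m 0x3` step above launches `nvmf_tgt` inside the target namespace with `--interrupt-mode`, then `waitforlisten` polls until the app answers on `/var/tmp/spdk.sock`. A hedged sketch of those two pieces (the launch command is echoed rather than executed, since it needs root, the namespace, and an SPDK build; the retry helper takes the probe command as an argument, whereas the real harness retries an RPC against the UNIX socket):

```shell
#!/usr/bin/env bash
# Sketch of nvmfappstart + waitforlisten from the log above.

launch_cmd() {
    # Flags mirror the log: app index 0, tracepoint mask 0xFFFF,
    # interrupt mode, reactors pinned to cores 0-1 (mask 0x3).
    echo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
}

wait_for_rpc() {
    # Real harness: keep retrying rpc.py -s /var/tmp/spdk.sock until the
    # target listens. Here the probe command is passed in for illustration.
    local max_retries=$1; shift
    local i
    for ((i = 0; i < max_retries; i++)); do
        "$@" && return 0
    done
    return 1
}
```

Interrupt mode is why the log prints the `spdk_interrupt_mode_enable` and per-thread "Set spdk_thread (...) to intr mode" notices: reactors sleep on file descriptors instead of busy-polling.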
00:29:53.440 [2024-11-27 05:51:40.742367] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:53.440 [2024-11-27 05:51:40.742581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.440 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.441 [2024-11-27 05:51:40.822178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.441 [2024-11-27 05:51:40.850519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.441 NULL1 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.441 Delay0 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1943147 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:53.441 05:51:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:53.441 [2024-11-27 05:51:40.964884] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
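The xtrace above is `delete_subsystem.sh` driving the target over RPC: create the TCP transport, create subsystem `cnode1`, add a listener on 10.0.0.2:4420, back it with a null bdev wrapped in a delay bdev (1,000,000 µs on every latency knob, so I/O submitted by `spdk_nvme_perf` is still in flight when the subsystem is deleted two seconds later). A dry-run sketch of that RPC sequence (`rpc.py` is assumed to be SPDK's scripts/rpc.py; commands are echoed here since they need a running `nvmf_tgt`):

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence from target/delete_subsystem.sh, as replayed
# in the log above. Swap the echo prefix for the real rpc.py to execute.
rpc="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

setup_target() {
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    # 1000-block, 512 B null bdev, then a delay bdev adding ~1 s to every
    # I/O class so requests are still queued when the subsystem is deleted
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns "$NQN" Delay0
}

setup_target
```

The delay bdev is the point of the test: `nvmf_delete_subsystem` must abort the queued I/O cleanly, which is why the perf run that follows reports every request completing with an error rather than hanging.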
00:29:55.344 05:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.344 05:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.344 05:51:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 starting I/O failed: -6 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 starting I/O failed: -6 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 starting I/O failed: -6 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 starting I/O failed: -6 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 starting I/O failed: -6 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 starting I/O failed: -6 00:29:55.344 Read completed with error (sct=0, 
sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 starting I/O failed: -6 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 starting I/O failed: -6 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 starting I/O failed: -6 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Write completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 Read completed with error (sct=0, sc=8) 00:29:55.344 starting I/O failed: -6 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 starting I/O failed: -6 00:29:55.345 starting I/O failed: -6 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read 
completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 starting I/O failed: -6 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 [2024-11-27 05:51:43.017736] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1f5c000c40 is same with the state(6) to be set 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 
Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error 
(sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:55.345 Write completed with error (sct=0, sc=8) 00:29:55.345 Read completed with error (sct=0, sc=8) 00:29:56.283 [2024-11-27 05:51:43.978695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9c9b0 is same with the state(6) to be set 00:29:56.283 Read completed with error (sct=0, sc=8) 00:29:56.283 Read completed with error (sct=0, sc=8) 00:29:56.283 Write completed with error (sct=0, sc=8) 00:29:56.283 Read completed with error (sct=0, sc=8) 00:29:56.283 Write completed with error (sct=0, sc=8) 00:29:56.283 Write completed with error (sct=0, sc=8) 00:29:56.283 Read completed with error (sct=0, sc=8) 00:29:56.283 Read completed with error (sct=0, sc=8) 00:29:56.283 Read completed with error (sct=0, sc=8) 00:29:56.283 Read completed with error (sct=0, sc=8) 
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 [2024-11-27 05:51:44.020440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1f5c00d680 is same with the state(6) to be set
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 [2024-11-27 05:51:44.021694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9b680 is same with the state(6) to be set
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 [2024-11-27 05:51:44.021942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9b860 is same with the state(6) to be set
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Write completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.283 Read completed with error (sct=0, sc=8)
00:29:56.284 Read completed with error (sct=0, sc=8)
00:29:56.284 Read completed with error (sct=0, sc=8)
00:29:56.284 Write completed with error (sct=0, sc=8)
00:29:56.284 Read completed with error (sct=0, sc=8)
00:29:56.284 Write completed with error (sct=0, sc=8)
00:29:56.284 Read completed with error (sct=0, sc=8)
00:29:56.284 Write completed with error (sct=0, sc=8)
00:29:56.284 Read completed with error (sct=0, sc=8)
00:29:56.284 Read completed with error (sct=0, sc=8)
00:29:56.284 Read completed with error (sct=0, sc=8)
00:29:56.284 Write completed with error (sct=0, sc=8)
00:29:56.284 Read completed with error (sct=0, sc=8)
00:29:56.284 Write completed with error (sct=0, sc=8)
00:29:56.284 Read completed with error (sct=0, sc=8)
00:29:56.284 [2024-11-27 05:51:44.022490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9b2c0 is same with the state(6) to be set
00:29:56.284 Initializing NVMe Controllers
00:29:56.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:56.284 Controller IO queue size 128, less than required.
00:29:56.284 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:56.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:56.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:56.284 Initialization complete. Launching workers.
00:29:56.284 ========================================================
00:29:56.284 Latency(us)
00:29:56.284 Device Information : IOPS MiB/s Average min max
00:29:56.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.12 0.09 964358.03 406.75 1043110.78
00:29:56.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.79 0.08 877894.10 240.30 1043740.79
00:29:56.284 ========================================================
00:29:56.284 Total : 331.91 0.16 924034.05 240.30 1043740.79
00:29:56.284
00:29:56.284 [2024-11-27 05:51:44.023104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9c9b0 (9): Bad file descriptor
00:29:56.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:56.284 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.284 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:29:56.284 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1943147
00:29:56.284 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1943147
00:29:56.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line
35: kill: (1943147) - No such process 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1943147 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1943147 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1943147 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:56.543 05:51:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.543 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:56.802 [2024-11-27 05:51:44.554426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1943621 00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1943621
00:29:56.802 05:51:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:29:56.802 [2024-11-27 05:51:44.639470] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:29:57.367 05:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:57.367 05:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1943621
00:29:57.367 05:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:29:57.625 05:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:57.625 05:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1943621
00:29:57.625 05:51:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:29:58.191 05:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:58.191 05:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1943621
00:29:58.191 05:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:29:58.757 05:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:58.757 05:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1943621
00:29:58.757 05:51:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:29:59.323 05:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:59.323 05:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1943621
00:29:59.323 05:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:29:59.891 05:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:59.891 05:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1943621
00:29:59.891 05:51:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:29:59.891 Initializing NVMe Controllers
00:29:59.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:59.891 Controller IO queue size 128, less than required.
00:29:59.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:59.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:59.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:59.891 Initialization complete. Launching workers.
00:29:59.891 ========================================================
00:29:59.891 Latency(us)
00:29:59.891 Device Information : IOPS MiB/s Average min max
00:29:59.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004136.50 1000163.14 1043351.55
00:29:59.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003779.70 1000159.76 1011285.50
00:29:59.891 ========================================================
00:29:59.891 Total : 256.00 0.12 1003958.10 1000159.76 1043351.55
00:29:59.891
00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1943621
00:30:00.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1943621) - No such process
00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1943621
00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:30:00.150 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:00.150 rmmod nvme_tcp 00:30:00.150 rmmod nvme_fabrics 00:30:00.150 rmmod nvme_keyring 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1943014 ']' 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1943014 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1943014 ']' 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1943014 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1943014 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:00.409 05:51:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1943014' 00:30:00.409 killing process with pid 1943014 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1943014 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1943014 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.409 05:51:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.409 05:51:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:02.947
00:30:02.947 real 0m16.025s
00:30:02.947 user 0m25.805s
00:30:02.947 sys 0m6.108s
00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:02.947 ************************************
00:30:02.947 END TEST nvmf_delete_subsystem
00:30:02.947 ************************************
00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:02.947 ************************************
00:30:02.947 START TEST nvmf_host_management
00:30:02.947 ************************************
00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:30:02.947 * Looking for test storage...
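The delete_subsystem trace above repeats the same xtrace triple — `(( delay++ > 20 ))`, `kill -0 1943621`, `sleep 0.5` — which is the test script polling every 0.5 s for the spdk_nvme_perf process to exit, giving up after roughly 20 iterations. A minimal sketch of that polling pattern in plain bash (the `wait_for_exit` helper name is illustrative, not taken from the SPDK scripts):

```shell
# Poll a background process with `kill -0` (probe only, no signal sent)
# every 0.5 s, bailing out after ~20 iterations (~10 s), mirroring the
# delay/kill/sleep loop visible in the trace above.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "timed out waiting for pid $pid" >&2
            return 1
        fi
        sleep 0.5
    done
    return 0
}

sleep 1 &            # stand-in for the spdk_nvme_perf workload
wait_for_exit $!     # returns 0 once the background process has exited
```

`kill -0` delivers no signal; it only checks whether the PID still exists, which is why the trace later shows `kill: (1943621) - No such process` once perf has gone away and the script falls through to `wait`.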
00:30:02.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:02.947 05:51:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.947 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.948 --rc genhtml_branch_coverage=1 00:30:02.948 --rc genhtml_function_coverage=1 00:30:02.948 --rc genhtml_legend=1 00:30:02.948 --rc geninfo_all_blocks=1 00:30:02.948 --rc geninfo_unexecuted_blocks=1 00:30:02.948 00:30:02.948 ' 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.948 --rc genhtml_branch_coverage=1 00:30:02.948 --rc genhtml_function_coverage=1 00:30:02.948 --rc genhtml_legend=1 00:30:02.948 --rc geninfo_all_blocks=1 00:30:02.948 --rc geninfo_unexecuted_blocks=1 00:30:02.948 00:30:02.948 ' 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.948 --rc genhtml_branch_coverage=1 00:30:02.948 --rc genhtml_function_coverage=1 00:30:02.948 --rc genhtml_legend=1 00:30:02.948 --rc geninfo_all_blocks=1 00:30:02.948 --rc geninfo_unexecuted_blocks=1 00:30:02.948 00:30:02.948 ' 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:02.948 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.948 --rc genhtml_branch_coverage=1 00:30:02.948 --rc genhtml_function_coverage=1 00:30:02.948 --rc genhtml_legend=1 00:30:02.948 --rc geninfo_all_blocks=1 00:30:02.948 --rc geninfo_unexecuted_blocks=1 00:30:02.948 00:30:02.948 ' 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.948 05:51:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.948 
05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:02.948 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:02.949 05:51:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.520 
05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.520 05:51:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:09.520 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.520 05:51:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:09.520 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.520 05:51:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.520 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:09.521 Found net devices under 0000:86:00.0: cvl_0_0 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:09.521 Found net devices under 0000:86:00.1: cvl_0_1 00:30:09.521 05:51:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:09.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:30:09.521 00:30:09.521 --- 10.0.0.2 ping statistics --- 00:30:09.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.521 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:09.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:30:09.521 00:30:09.521 --- 10.0.0.1 ping statistics --- 00:30:09.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.521 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1947820 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1947820 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1947820 ']' 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.521 05:51:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.521 [2024-11-27 05:51:56.706469] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:09.521 [2024-11-27 05:51:56.707385] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:30:09.521 [2024-11-27 05:51:56.707419] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.521 [2024-11-27 05:51:56.787361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:09.521 [2024-11-27 05:51:56.829544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:09.521 [2024-11-27 05:51:56.829580] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.521 [2024-11-27 05:51:56.829590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.521 [2024-11-27 05:51:56.829595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.521 [2024-11-27 05:51:56.829600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:09.521 [2024-11-27 05:51:56.831097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.521 [2024-11-27 05:51:56.831198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:09.521 [2024-11-27 05:51:56.834685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:09.521 [2024-11-27 05:51:56.834688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.521 [2024-11-27 05:51:56.902495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:09.521 [2024-11-27 05:51:56.903056] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:09.521 [2024-11-27 05:51:56.903385] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:09.521 [2024-11-27 05:51:56.903774] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:09.521 [2024-11-27 05:51:56.903778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.781 [2024-11-27 05:51:57.583377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.781 05:51:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.781 Malloc0 00:30:09.781 [2024-11-27 05:51:57.675688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1947898 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1947898 /var/tmp/bdevperf.sock 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1947898 ']' 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:09.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:09.781 { 00:30:09.781 "params": { 00:30:09.781 "name": "Nvme$subsystem", 00:30:09.781 "trtype": "$TEST_TRANSPORT", 00:30:09.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.781 "adrfam": "ipv4", 00:30:09.781 "trsvcid": "$NVMF_PORT", 00:30:09.781 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.781 "hdgst": ${hdgst:-false}, 00:30:09.781 "ddgst": ${ddgst:-false} 00:30:09.781 }, 00:30:09.781 "method": "bdev_nvme_attach_controller" 00:30:09.781 } 00:30:09.781 EOF 00:30:09.781 )") 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:09.781 05:51:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:09.781 "params": { 00:30:09.781 "name": "Nvme0", 00:30:09.781 "trtype": "tcp", 00:30:09.781 "traddr": "10.0.0.2", 00:30:09.781 "adrfam": "ipv4", 00:30:09.781 "trsvcid": "4420", 00:30:09.781 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:09.781 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:09.781 "hdgst": false, 00:30:09.781 "ddgst": false 00:30:09.781 }, 00:30:09.781 "method": "bdev_nvme_attach_controller" 00:30:09.781 }' 00:30:09.781 [2024-11-27 05:51:57.773432] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:30:09.781 [2024-11-27 05:51:57.773486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1947898 ] 00:30:10.040 [2024-11-27 05:51:57.851087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.040 [2024-11-27 05:51:57.892102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.299 Running I/O for 10 seconds... 
00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:10.299 05:51:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:30:10.299 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:10.558 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:10.558 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:10.558 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:10.558 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.558 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:30:10.558 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.558 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.819 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:30:10.819 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:30:10.819 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:10.819 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:10.819 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:10.819 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:10.819 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.819 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.819 [2024-11-27 05:51:58.575124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1252d70 is same with the state(6) to be set 00:30:10.820 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.820 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management --
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:10.820 [2024-11-27 05:51:58.580696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.820 [2024-11-27 05:51:58.580728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.820 [2024-11-27 05:51:58.580745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.820 [2024-11-27 05:51:58.580759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:10.820 [2024-11-27 05:51:58.580773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919510 is same with the state(6) to be set 00:30:10.820 [2024-11-27 05:51:58.580824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.820 [2024-11-27 05:51:58.580833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.820 [2024-11-27 05:51:58.580855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.820 [2024-11-27 05:51:58.580870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.820 [2024-11-27 05:51:58.580885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.820 [2024-11-27 05:51:58.580900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.820 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.820 [2024-11-27 05:51:58.580915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.820 [2024-11-27 05:51:58.580930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.820 [2024-11-27 05:51:58.580945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.820 [2024-11-27 05:51:58.580953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.820 [2024-11-27 05:51:58.580960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.580968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.580974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.580982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.580989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.580997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 
[2024-11-27 05:51:58.581176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:10.821 [2024-11-27 05:51:58.581202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 
05:51:58.581419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.821 [2024-11-27 05:51:58.581519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.821 [2024-11-27 05:51:58.581527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 
[2024-11-27 05:51:58.581755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.581770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.822 [2024-11-27 05:51:58.581777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.582737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:10.822 task offset: 100992 on job bdev=Nvme0n1 fails 00:30:10.822 00:30:10.822 Latency(us) 00:30:10.822 [2024-11-27T04:51:58.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.822 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:10.822 Job: Nvme0n1 ended in about 0.41 seconds with error 00:30:10.822 Verification LBA range: start 0x0 length 0x400 00:30:10.822 Nvme0n1 : 0.41 1919.72 119.98 155.72 0.00 30029.91 1435.55 27088.21 00:30:10.822 [2024-11-27T04:51:58.826Z] =================================================================================================================== 00:30:10.822 [2024-11-27T04:51:58.826Z] Total : 1919.72 119.98 155.72 0.00 30029.91 1435.55 27088.21 00:30:10.822 [2024-11-27 05:51:58.585080] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:10.822 [2024-11-27 05:51:58.585100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919510 (9): Bad file descriptor 00:30:10.822 [2024-11-27 05:51:58.586097] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' 
does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:10.822 [2024-11-27 05:51:58.586165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:10.822 [2024-11-27 05:51:58.586187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:10.822 [2024-11-27 05:51:58.586201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:10.822 [2024-11-27 05:51:58.586209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:10.822 [2024-11-27 05:51:58.586215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:10.822 [2024-11-27 05:51:58.586221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1919510 00:30:10.822 [2024-11-27 05:51:58.586239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919510 (9): Bad file descriptor 00:30:10.822 [2024-11-27 05:51:58.586249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:10.822 [2024-11-27 05:51:58.586256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:10.822 [2024-11-27 05:51:58.586264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:10.822 [2024-11-27 05:51:58.586271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:10.822 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.822 05:51:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1947898 00:30:11.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1947898) - No such process 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:11.763 { 00:30:11.763 "params": { 00:30:11.763 "name": "Nvme$subsystem", 00:30:11.763 "trtype": "$TEST_TRANSPORT", 00:30:11.763 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:30:11.763 "adrfam": "ipv4", 00:30:11.763 "trsvcid": "$NVMF_PORT", 00:30:11.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.763 "hdgst": ${hdgst:-false}, 00:30:11.763 "ddgst": ${ddgst:-false} 00:30:11.763 }, 00:30:11.763 "method": "bdev_nvme_attach_controller" 00:30:11.763 } 00:30:11.763 EOF 00:30:11.763 )") 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:11.763 05:51:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:11.763 "params": { 00:30:11.763 "name": "Nvme0", 00:30:11.763 "trtype": "tcp", 00:30:11.763 "traddr": "10.0.0.2", 00:30:11.763 "adrfam": "ipv4", 00:30:11.763 "trsvcid": "4420", 00:30:11.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.763 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:11.763 "hdgst": false, 00:30:11.763 "ddgst": false 00:30:11.763 }, 00:30:11.763 "method": "bdev_nvme_attach_controller" 00:30:11.763 }' 00:30:11.763 [2024-11-27 05:51:59.646390] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:30:11.763 [2024-11-27 05:51:59.646437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1948339 ] 00:30:11.763 [2024-11-27 05:51:59.719995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.763 [2024-11-27 05:51:59.759647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.332 Running I/O for 1 seconds... 
00:30:13.270 1984.00 IOPS, 124.00 MiB/s 00:30:13.270 Latency(us) 00:30:13.270 [2024-11-27T04:52:01.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.270 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.270 Verification LBA range: start 0x0 length 0x400 00:30:13.270 Nvme0n1 : 1.00 2039.38 127.46 0.00 0.00 30894.20 7458.62 27213.04 00:30:13.270 [2024-11-27T04:52:01.274Z] =================================================================================================================== 00:30:13.270 [2024-11-27T04:52:01.274Z] Total : 2039.38 127.46 0.00 0.00 30894.20 7458.62 27213.04 00:30:13.270 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:13.271 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:13.271 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:13.271 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:13.271 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:13.271 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:13.271 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:13.271 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:13.271 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:13.271 
05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.271 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.271 rmmod nvme_tcp 00:30:13.530 rmmod nvme_fabrics 00:30:13.530 rmmod nvme_keyring 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1947820 ']' 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1947820 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1947820 ']' 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1947820 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1947820 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:13.530 05:52:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1947820' 00:30:13.530 killing process with pid 1947820 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1947820 00:30:13.530 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1947820 00:30:13.530 [2024-11-27 05:52:01.531812] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.790 05:52:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.696 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:15.696 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:15.696 00:30:15.696 real 0m13.110s 00:30:15.696 user 0m18.711s 00:30:15.696 sys 0m6.387s 00:30:15.696 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.696 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:15.696 ************************************ 00:30:15.696 END TEST nvmf_host_management 00:30:15.696 ************************************ 00:30:15.696 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:15.696 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:15.696 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.696 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:15.956 ************************************ 00:30:15.956 START TEST nvmf_lvol 00:30:15.956 ************************************ 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:15.956 * Looking for test storage... 
00:30:15.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:15.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.956 --rc genhtml_branch_coverage=1 00:30:15.956 --rc genhtml_function_coverage=1 00:30:15.956 --rc genhtml_legend=1 00:30:15.956 --rc geninfo_all_blocks=1 00:30:15.956 --rc geninfo_unexecuted_blocks=1 00:30:15.956 00:30:15.956 ' 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:15.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.956 --rc genhtml_branch_coverage=1 00:30:15.956 --rc genhtml_function_coverage=1 00:30:15.956 --rc genhtml_legend=1 00:30:15.956 --rc geninfo_all_blocks=1 00:30:15.956 --rc geninfo_unexecuted_blocks=1 00:30:15.956 00:30:15.956 ' 00:30:15.956 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:15.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.956 --rc genhtml_branch_coverage=1 00:30:15.956 --rc genhtml_function_coverage=1 00:30:15.956 --rc genhtml_legend=1 00:30:15.956 --rc geninfo_all_blocks=1 00:30:15.957 --rc geninfo_unexecuted_blocks=1 00:30:15.957 00:30:15.957 ' 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:15.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.957 --rc genhtml_branch_coverage=1 00:30:15.957 --rc genhtml_function_coverage=1 00:30:15.957 --rc genhtml_legend=1 00:30:15.957 --rc geninfo_all_blocks=1 00:30:15.957 --rc geninfo_unexecuted_blocks=1 00:30:15.957 00:30:15.957 ' 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.957 
05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.957 05:52:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:22.530 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:22.530 05:52:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:22.531 05:52:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:22.531 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:22.531 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:22.531 05:52:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:22.531 Found net devices under 0000:86:00.0: cvl_0_0 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.531 05:52:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:22.531 Found net devices under 0000:86:00.1: cvl_0_1 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:22.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:22.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms 00:30:22.531 00:30:22.531 --- 10.0.0.2 ping statistics --- 00:30:22.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.531 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:22.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:22.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:30:22.531 00:30:22.531 --- 10.0.0.1 ping statistics --- 00:30:22.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:22.531 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:22.531 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1952098 
00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1952098 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1952098 ']' 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.532 05:52:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:22.532 [2024-11-27 05:52:09.917329] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:22.532 [2024-11-27 05:52:09.918255] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:30:22.532 [2024-11-27 05:52:09.918290] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.532 [2024-11-27 05:52:09.997778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:22.532 [2024-11-27 05:52:10.044018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.532 [2024-11-27 05:52:10.044058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.532 [2024-11-27 05:52:10.044065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.532 [2024-11-27 05:52:10.044071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.532 [2024-11-27 05:52:10.044076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.532 [2024-11-27 05:52:10.045361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.532 [2024-11-27 05:52:10.045467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.532 [2024-11-27 05:52:10.045468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.532 [2024-11-27 05:52:10.115167] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:22.532 [2024-11-27 05:52:10.115917] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:22.532 [2024-11-27 05:52:10.116079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:22.532 [2024-11-27 05:52:10.116235] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:22.792 05:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.792 05:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:22.792 05:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.792 05:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.792 05:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:22.792 05:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.792 05:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:23.051 [2024-11-27 05:52:10.958241] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.051 05:52:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:23.310 05:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:23.310 05:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:23.569 05:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:23.569 05:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:23.828 05:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:24.086 05:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5765cb95-dde0-42a0-99fe-6ed543ce28a3 00:30:24.087 05:52:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5765cb95-dde0-42a0-99fe-6ed543ce28a3 lvol 20 00:30:24.087 05:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fb61068e-16eb-4d80-bf5c-16a8709a17b2 00:30:24.087 05:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:24.345 05:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fb61068e-16eb-4d80-bf5c-16a8709a17b2 00:30:24.626 05:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:24.626 [2024-11-27 05:52:12.570126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.626 05:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:24.898 
05:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1952596 00:30:24.898 05:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:24.898 05:52:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:25.905 05:52:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fb61068e-16eb-4d80-bf5c-16a8709a17b2 MY_SNAPSHOT 00:30:26.177 05:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ffd221ee-6da3-4ac0-a817-e4cad9c41136 00:30:26.177 05:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fb61068e-16eb-4d80-bf5c-16a8709a17b2 30 00:30:26.435 05:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ffd221ee-6da3-4ac0-a817-e4cad9c41136 MY_CLONE 00:30:26.693 05:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b4740bfc-e93d-4fb3-9b82-b57007ec95f6 00:30:26.693 05:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b4740bfc-e93d-4fb3-9b82-b57007ec95f6 00:30:27.260 05:52:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1952596 00:30:35.377 Initializing NVMe Controllers 00:30:35.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:35.377 
Controller IO queue size 128, less than required. 00:30:35.377 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:35.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:35.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:35.377 Initialization complete. Launching workers. 00:30:35.377 ======================================================== 00:30:35.377 Latency(us) 00:30:35.377 Device Information : IOPS MiB/s Average min max 00:30:35.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12418.90 48.51 10309.55 1551.59 45477.62 00:30:35.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12534.70 48.96 10210.47 3485.94 47983.23 00:30:35.377 ======================================================== 00:30:35.377 Total : 24953.60 97.47 10259.78 1551.59 47983.23 00:30:35.377 00:30:35.377 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:35.635 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fb61068e-16eb-4d80-bf5c-16a8709a17b2 00:30:35.635 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5765cb95-dde0-42a0-99fe-6ed543ce28a3 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.894 rmmod nvme_tcp 00:30:35.894 rmmod nvme_fabrics 00:30:35.894 rmmod nvme_keyring 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1952098 ']' 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1952098 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1952098 ']' 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1952098 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.894 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1952098 00:30:36.153 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:36.153 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:36.153 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1952098' 00:30:36.153 killing process with pid 1952098 00:30:36.153 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1952098 00:30:36.153 05:52:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1952098 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.153 05:52:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:36.153 05:52:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:38.690 00:30:38.690 real 0m22.492s 00:30:38.690 user 0m55.801s 00:30:38.690 sys 0m9.861s 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:38.690 ************************************ 00:30:38.690 END TEST nvmf_lvol 00:30:38.690 ************************************ 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:38.690 ************************************ 00:30:38.690 START TEST nvmf_lvs_grow 00:30:38.690 ************************************ 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:38.690 * Looking for test storage... 
00:30:38.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.690 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.691 05:52:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.691 05:52:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:38.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.691 --rc genhtml_branch_coverage=1 00:30:38.691 --rc genhtml_function_coverage=1 00:30:38.691 --rc genhtml_legend=1 00:30:38.691 --rc geninfo_all_blocks=1 00:30:38.691 --rc geninfo_unexecuted_blocks=1 00:30:38.691 00:30:38.691 ' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:38.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.691 --rc genhtml_branch_coverage=1 00:30:38.691 --rc genhtml_function_coverage=1 00:30:38.691 --rc genhtml_legend=1 00:30:38.691 --rc geninfo_all_blocks=1 00:30:38.691 --rc geninfo_unexecuted_blocks=1 00:30:38.691 00:30:38.691 ' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:38.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.691 --rc genhtml_branch_coverage=1 00:30:38.691 --rc genhtml_function_coverage=1 00:30:38.691 --rc genhtml_legend=1 00:30:38.691 --rc geninfo_all_blocks=1 00:30:38.691 --rc geninfo_unexecuted_blocks=1 00:30:38.691 00:30:38.691 ' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:38.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.691 --rc genhtml_branch_coverage=1 00:30:38.691 --rc genhtml_function_coverage=1 00:30:38.691 --rc genhtml_legend=1 00:30:38.691 --rc geninfo_all_blocks=1 00:30:38.691 --rc 
geninfo_unexecuted_blocks=1 00:30:38.691 00:30:38.691 ' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:38.691 05:52:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.691 05:52:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:38.691 05:52:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.691 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.692 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:38.692 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:38.692 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:38.692 05:52:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.263 
05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.263 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.263 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.264 05:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.264 05:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:45.264 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:45.264 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:45.264 Found net devices under 0000:86:00.0: cvl_0_0 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.264 05:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:45.264 Found net devices under 0000:86:00.1: cvl_0_1 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.264 
05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.264 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:30:45.265 00:30:45.265 --- 10.0.0.2 ping statistics --- 00:30:45.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.265 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:30:45.265 00:30:45.265 --- 10.0.0.1 ping statistics --- 00:30:45.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.265 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.265 05:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1957741 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1957741 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1957741 ']' 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.265 [2024-11-27 05:52:32.448246] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.265 [2024-11-27 05:52:32.449166] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:30:45.265 [2024-11-27 05:52:32.449199] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.265 [2024-11-27 05:52:32.528524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.265 [2024-11-27 05:52:32.568809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.265 [2024-11-27 05:52:32.568846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.265 [2024-11-27 05:52:32.568853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.265 [2024-11-27 05:52:32.568860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.265 [2024-11-27 05:52:32.568865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.265 [2024-11-27 05:52:32.569422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.265 [2024-11-27 05:52:32.636475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.265 [2024-11-27 05:52:32.636711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:45.265 [2024-11-27 05:52:32.870079] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.265 ************************************ 00:30:45.265 START TEST lvs_grow_clean 00:30:45.265 ************************************ 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:45.265 05:52:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:45.265 05:52:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:45.265 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:45.265 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:45.525 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:45.525 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:45.525 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:45.783 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:45.784 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:45.784 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c lvol 150 00:30:45.784 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a456a71c-7be5-4f76-b98a-ecac12a392ef 00:30:45.784 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:45.784 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:46.042 [2024-11-27 05:52:33.917818] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:46.042 [2024-11-27 05:52:33.917950] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:46.042 true 00:30:46.042 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:46.042 05:52:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:46.301 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:46.301 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:46.560 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a456a71c-7be5-4f76-b98a-ecac12a392ef 00:30:46.560 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:46.819 [2024-11-27 05:52:34.662296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.819 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:47.078 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:47.078 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1958239 00:30:47.078 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:47.078 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1958239 /var/tmp/bdevperf.sock 00:30:47.078 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1958239 ']' 00:30:47.078 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:47.078 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:47.078 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:47.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:47.078 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:47.078 05:52:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:47.078 [2024-11-27 05:52:34.889587] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:30:47.078 [2024-11-27 05:52:34.889634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1958239 ] 00:30:47.078 [2024-11-27 05:52:34.965074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.078 [2024-11-27 05:52:35.007369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.337 05:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.337 05:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:47.337 05:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:47.596 Nvme0n1 00:30:47.596 05:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:47.596 [ 00:30:47.596 { 00:30:47.596 "name": "Nvme0n1", 00:30:47.596 "aliases": [ 00:30:47.596 "a456a71c-7be5-4f76-b98a-ecac12a392ef" 00:30:47.596 ], 00:30:47.596 "product_name": "NVMe disk", 00:30:47.596 
"block_size": 4096, 00:30:47.596 "num_blocks": 38912, 00:30:47.596 "uuid": "a456a71c-7be5-4f76-b98a-ecac12a392ef", 00:30:47.596 "numa_id": 1, 00:30:47.596 "assigned_rate_limits": { 00:30:47.596 "rw_ios_per_sec": 0, 00:30:47.596 "rw_mbytes_per_sec": 0, 00:30:47.596 "r_mbytes_per_sec": 0, 00:30:47.596 "w_mbytes_per_sec": 0 00:30:47.596 }, 00:30:47.596 "claimed": false, 00:30:47.596 "zoned": false, 00:30:47.596 "supported_io_types": { 00:30:47.596 "read": true, 00:30:47.596 "write": true, 00:30:47.596 "unmap": true, 00:30:47.596 "flush": true, 00:30:47.596 "reset": true, 00:30:47.596 "nvme_admin": true, 00:30:47.596 "nvme_io": true, 00:30:47.596 "nvme_io_md": false, 00:30:47.596 "write_zeroes": true, 00:30:47.596 "zcopy": false, 00:30:47.596 "get_zone_info": false, 00:30:47.596 "zone_management": false, 00:30:47.596 "zone_append": false, 00:30:47.596 "compare": true, 00:30:47.596 "compare_and_write": true, 00:30:47.596 "abort": true, 00:30:47.597 "seek_hole": false, 00:30:47.597 "seek_data": false, 00:30:47.597 "copy": true, 00:30:47.597 "nvme_iov_md": false 00:30:47.597 }, 00:30:47.597 "memory_domains": [ 00:30:47.597 { 00:30:47.597 "dma_device_id": "system", 00:30:47.597 "dma_device_type": 1 00:30:47.597 } 00:30:47.597 ], 00:30:47.597 "driver_specific": { 00:30:47.597 "nvme": [ 00:30:47.597 { 00:30:47.597 "trid": { 00:30:47.597 "trtype": "TCP", 00:30:47.597 "adrfam": "IPv4", 00:30:47.597 "traddr": "10.0.0.2", 00:30:47.597 "trsvcid": "4420", 00:30:47.597 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:47.597 }, 00:30:47.597 "ctrlr_data": { 00:30:47.597 "cntlid": 1, 00:30:47.597 "vendor_id": "0x8086", 00:30:47.597 "model_number": "SPDK bdev Controller", 00:30:47.597 "serial_number": "SPDK0", 00:30:47.597 "firmware_revision": "25.01", 00:30:47.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:47.597 "oacs": { 00:30:47.597 "security": 0, 00:30:47.597 "format": 0, 00:30:47.597 "firmware": 0, 00:30:47.597 "ns_manage": 0 00:30:47.597 }, 00:30:47.597 "multi_ctrlr": true, 
00:30:47.597 "ana_reporting": false 00:30:47.597 }, 00:30:47.597 "vs": { 00:30:47.597 "nvme_version": "1.3" 00:30:47.597 }, 00:30:47.597 "ns_data": { 00:30:47.597 "id": 1, 00:30:47.597 "can_share": true 00:30:47.597 } 00:30:47.597 } 00:30:47.597 ], 00:30:47.597 "mp_policy": "active_passive" 00:30:47.597 } 00:30:47.597 } 00:30:47.597 ] 00:30:47.597 05:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1958373 00:30:47.597 05:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:47.597 05:52:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:47.857 Running I/O for 10 seconds... 00:30:48.793 Latency(us) 00:30:48.793 [2024-11-27T04:52:36.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.793 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:48.793 [2024-11-27T04:52:36.797Z] =================================================================================================================== 00:30:48.793 [2024-11-27T04:52:36.797Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:30:48.793 00:30:49.729 05:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:49.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:49.729 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:30:49.729 [2024-11-27T04:52:37.733Z] 
=================================================================================================================== 00:30:49.729 [2024-11-27T04:52:37.733Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:30:49.729 00:30:49.988 true 00:30:49.988 05:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:49.988 05:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:50.247 05:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:50.247 05:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:50.247 05:52:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1958373 00:30:50.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.815 Nvme0n1 : 3.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:50.815 [2024-11-27T04:52:38.819Z] =================================================================================================================== 00:30:50.815 [2024-11-27T04:52:38.819Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:30:50.815 00:30:51.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.759 Nvme0n1 : 4.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:30:51.759 [2024-11-27T04:52:39.763Z] =================================================================================================================== 00:30:51.759 [2024-11-27T04:52:39.763Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:30:51.759 00:30:52.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:52.697 Nvme0n1 : 5.00 23342.60 91.18 0.00 0.00 0.00 0.00 0.00 00:30:52.697 [2024-11-27T04:52:40.701Z] =================================================================================================================== 00:30:52.697 [2024-11-27T04:52:40.701Z] Total : 23342.60 91.18 0.00 0.00 0.00 0.00 0.00 00:30:52.697 00:30:54.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:54.074 Nvme0n1 : 6.00 23389.17 91.36 0.00 0.00 0.00 0.00 0.00 00:30:54.074 [2024-11-27T04:52:42.078Z] =================================================================================================================== 00:30:54.074 [2024-11-27T04:52:42.078Z] Total : 23389.17 91.36 0.00 0.00 0.00 0.00 0.00 00:30:54.074 00:30:55.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:55.011 Nvme0n1 : 7.00 23422.43 91.49 0.00 0.00 0.00 0.00 0.00 00:30:55.011 [2024-11-27T04:52:43.015Z] =================================================================================================================== 00:30:55.011 [2024-11-27T04:52:43.015Z] Total : 23422.43 91.49 0.00 0.00 0.00 0.00 0.00 00:30:55.011 00:30:55.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:55.948 Nvme0n1 : 8.00 23447.38 91.59 0.00 0.00 0.00 0.00 0.00 00:30:55.948 [2024-11-27T04:52:43.952Z] =================================================================================================================== 00:30:55.948 [2024-11-27T04:52:43.952Z] Total : 23447.38 91.59 0.00 0.00 0.00 0.00 0.00 00:30:55.948 00:30:56.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:56.886 Nvme0n1 : 9.00 23473.89 91.69 0.00 0.00 0.00 0.00 0.00 00:30:56.886 [2024-11-27T04:52:44.890Z] =================================================================================================================== 00:30:56.886 [2024-11-27T04:52:44.890Z] Total : 23473.89 91.69 0.00 0.00 0.00 0.00 0.00 00:30:56.886 
00:30:57.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:57.823 Nvme0n1 : 10.00 23488.70 91.75 0.00 0.00 0.00 0.00 0.00 00:30:57.823 [2024-11-27T04:52:45.827Z] =================================================================================================================== 00:30:57.823 [2024-11-27T04:52:45.827Z] Total : 23488.70 91.75 0.00 0.00 0.00 0.00 0.00 00:30:57.823 00:30:57.823 00:30:57.823 Latency(us) 00:30:57.823 [2024-11-27T04:52:45.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:57.823 Nvme0n1 : 10.00 23492.48 91.77 0.00 0.00 5445.60 3464.05 25590.25 00:30:57.823 [2024-11-27T04:52:45.827Z] =================================================================================================================== 00:30:57.823 [2024-11-27T04:52:45.827Z] Total : 23492.48 91.77 0.00 0.00 5445.60 3464.05 25590.25 00:30:57.823 { 00:30:57.823 "results": [ 00:30:57.823 { 00:30:57.823 "job": "Nvme0n1", 00:30:57.823 "core_mask": "0x2", 00:30:57.823 "workload": "randwrite", 00:30:57.823 "status": "finished", 00:30:57.823 "queue_depth": 128, 00:30:57.823 "io_size": 4096, 00:30:57.823 "runtime": 10.003841, 00:30:57.823 "iops": 23492.47653976108, 00:30:57.823 "mibps": 91.76748648344171, 00:30:57.823 "io_failed": 0, 00:30:57.823 "io_timeout": 0, 00:30:57.823 "avg_latency_us": 5445.595638389849, 00:30:57.823 "min_latency_us": 3464.0457142857144, 00:30:57.823 "max_latency_us": 25590.24761904762 00:30:57.823 } 00:30:57.823 ], 00:30:57.823 "core_count": 1 00:30:57.823 } 00:30:57.823 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1958239 00:30:57.823 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1958239 ']' 00:30:57.823 05:52:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1958239 00:30:57.823 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:57.824 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.824 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1958239 00:30:57.824 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:57.824 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:57.824 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1958239' 00:30:57.824 killing process with pid 1958239 00:30:57.824 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1958239 00:30:57.824 Received shutdown signal, test time was about 10.000000 seconds 00:30:57.824 00:30:57.824 Latency(us) 00:30:57.824 [2024-11-27T04:52:45.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.824 [2024-11-27T04:52:45.828Z] =================================================================================================================== 00:30:57.824 [2024-11-27T04:52:45.828Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:57.824 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1958239 00:30:58.083 05:52:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:58.341 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:58.341 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:58.342 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:58.601 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:58.601 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:58.601 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:58.860 [2024-11-27 05:52:46.693989] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:58.860 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:59.119 request: 00:30:59.119 { 00:30:59.119 "uuid": "dad6e904-45f3-40a2-8ebd-7bb7a8086c9c", 00:30:59.119 "method": 
"bdev_lvol_get_lvstores", 00:30:59.119 "req_id": 1 00:30:59.119 } 00:30:59.119 Got JSON-RPC error response 00:30:59.119 response: 00:30:59.119 { 00:30:59.119 "code": -19, 00:30:59.119 "message": "No such device" 00:30:59.119 } 00:30:59.119 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:59.119 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:59.119 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:59.119 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:59.120 05:52:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:59.120 aio_bdev 00:30:59.378 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a456a71c-7be5-4f76-b98a-ecac12a392ef 00:30:59.379 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a456a71c-7be5-4f76-b98a-ecac12a392ef 00:30:59.379 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:59.379 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:59.379 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:59.379 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:59.379 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:59.379 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a456a71c-7be5-4f76-b98a-ecac12a392ef -t 2000 00:30:59.637 [ 00:30:59.637 { 00:30:59.637 "name": "a456a71c-7be5-4f76-b98a-ecac12a392ef", 00:30:59.637 "aliases": [ 00:30:59.637 "lvs/lvol" 00:30:59.637 ], 00:30:59.637 "product_name": "Logical Volume", 00:30:59.637 "block_size": 4096, 00:30:59.637 "num_blocks": 38912, 00:30:59.637 "uuid": "a456a71c-7be5-4f76-b98a-ecac12a392ef", 00:30:59.637 "assigned_rate_limits": { 00:30:59.637 "rw_ios_per_sec": 0, 00:30:59.637 "rw_mbytes_per_sec": 0, 00:30:59.637 "r_mbytes_per_sec": 0, 00:30:59.637 "w_mbytes_per_sec": 0 00:30:59.637 }, 00:30:59.637 "claimed": false, 00:30:59.637 "zoned": false, 00:30:59.637 "supported_io_types": { 00:30:59.637 "read": true, 00:30:59.637 "write": true, 00:30:59.637 "unmap": true, 00:30:59.637 "flush": false, 00:30:59.637 "reset": true, 00:30:59.637 "nvme_admin": false, 00:30:59.637 "nvme_io": false, 00:30:59.637 "nvme_io_md": false, 00:30:59.637 "write_zeroes": true, 00:30:59.637 "zcopy": false, 00:30:59.637 "get_zone_info": false, 00:30:59.637 "zone_management": false, 00:30:59.637 "zone_append": false, 00:30:59.637 "compare": false, 00:30:59.637 "compare_and_write": false, 00:30:59.637 "abort": false, 00:30:59.637 "seek_hole": true, 00:30:59.637 "seek_data": true, 00:30:59.637 "copy": false, 00:30:59.637 "nvme_iov_md": false 00:30:59.637 }, 00:30:59.637 "driver_specific": { 00:30:59.637 "lvol": { 00:30:59.637 "lvol_store_uuid": "dad6e904-45f3-40a2-8ebd-7bb7a8086c9c", 00:30:59.637 "base_bdev": "aio_bdev", 00:30:59.637 
"thin_provision": false, 00:30:59.637 "num_allocated_clusters": 38, 00:30:59.637 "snapshot": false, 00:30:59.637 "clone": false, 00:30:59.637 "esnap_clone": false 00:30:59.637 } 00:30:59.637 } 00:30:59.637 } 00:30:59.637 ] 00:30:59.637 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:59.637 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:59.637 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:59.897 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:59.897 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 00:30:59.897 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:00.156 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:00.156 05:52:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a456a71c-7be5-4f76-b98a-ecac12a392ef 00:31:00.156 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dad6e904-45f3-40a2-8ebd-7bb7a8086c9c 
00:31:00.415 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:00.675 00:31:00.675 real 0m15.626s 00:31:00.675 user 0m15.147s 00:31:00.675 sys 0m1.500s 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:00.675 ************************************ 00:31:00.675 END TEST lvs_grow_clean 00:31:00.675 ************************************ 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:00.675 ************************************ 00:31:00.675 START TEST lvs_grow_dirty 00:31:00.675 ************************************ 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:00.675 05:52:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:00.675 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:00.934 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:00.934 05:52:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:01.194 05:52:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:01.194 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:01.194 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:01.452 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:01.452 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:01.452 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 lvol 150 00:31:01.452 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=79550ea3-779e-4c87-b63f-1dd6d434eb9c 00:31:01.452 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:01.452 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:01.710 [2024-11-27 05:52:49.613810] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:01.710 [2024-11-27 
05:52:49.613940] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:01.710 true 00:31:01.710 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:01.710 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:01.969 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:01.969 05:52:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:02.228 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 79550ea3-779e-4c87-b63f-1dd6d434eb9c 00:31:02.487 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.487 [2024-11-27 05:52:50.406338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.487 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:02.747 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1960817 00:31:02.747 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:02.747 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.747 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1960817 /var/tmp/bdevperf.sock 00:31:02.747 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1960817 ']' 00:31:02.747 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:02.747 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:02.747 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:02.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:02.747 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:02.747 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:02.747 [2024-11-27 05:52:50.665516] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:31:02.747 [2024-11-27 05:52:50.665568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1960817 ] 00:31:02.747 [2024-11-27 05:52:50.739091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.006 [2024-11-27 05:52:50.781246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.006 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:03.006 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:03.006 05:52:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:03.265 Nvme0n1 00:31:03.265 05:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:03.525 [ 00:31:03.525 { 00:31:03.525 "name": "Nvme0n1", 00:31:03.525 "aliases": [ 00:31:03.525 "79550ea3-779e-4c87-b63f-1dd6d434eb9c" 00:31:03.525 ], 00:31:03.525 "product_name": "NVMe disk", 00:31:03.525 "block_size": 4096, 00:31:03.525 "num_blocks": 38912, 00:31:03.525 "uuid": "79550ea3-779e-4c87-b63f-1dd6d434eb9c", 00:31:03.525 "numa_id": 1, 00:31:03.525 "assigned_rate_limits": { 00:31:03.525 "rw_ios_per_sec": 0, 00:31:03.525 "rw_mbytes_per_sec": 0, 00:31:03.525 "r_mbytes_per_sec": 0, 00:31:03.525 "w_mbytes_per_sec": 0 00:31:03.525 }, 00:31:03.525 "claimed": false, 00:31:03.525 "zoned": false, 
00:31:03.525 "supported_io_types": { 00:31:03.525 "read": true, 00:31:03.525 "write": true, 00:31:03.525 "unmap": true, 00:31:03.525 "flush": true, 00:31:03.525 "reset": true, 00:31:03.525 "nvme_admin": true, 00:31:03.525 "nvme_io": true, 00:31:03.525 "nvme_io_md": false, 00:31:03.525 "write_zeroes": true, 00:31:03.525 "zcopy": false, 00:31:03.525 "get_zone_info": false, 00:31:03.525 "zone_management": false, 00:31:03.525 "zone_append": false, 00:31:03.525 "compare": true, 00:31:03.525 "compare_and_write": true, 00:31:03.525 "abort": true, 00:31:03.525 "seek_hole": false, 00:31:03.525 "seek_data": false, 00:31:03.525 "copy": true, 00:31:03.525 "nvme_iov_md": false 00:31:03.525 }, 00:31:03.525 "memory_domains": [ 00:31:03.525 { 00:31:03.525 "dma_device_id": "system", 00:31:03.525 "dma_device_type": 1 00:31:03.525 } 00:31:03.525 ], 00:31:03.526 "driver_specific": { 00:31:03.526 "nvme": [ 00:31:03.526 { 00:31:03.526 "trid": { 00:31:03.526 "trtype": "TCP", 00:31:03.526 "adrfam": "IPv4", 00:31:03.526 "traddr": "10.0.0.2", 00:31:03.526 "trsvcid": "4420", 00:31:03.526 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:03.526 }, 00:31:03.526 "ctrlr_data": { 00:31:03.526 "cntlid": 1, 00:31:03.526 "vendor_id": "0x8086", 00:31:03.526 "model_number": "SPDK bdev Controller", 00:31:03.526 "serial_number": "SPDK0", 00:31:03.526 "firmware_revision": "25.01", 00:31:03.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.526 "oacs": { 00:31:03.526 "security": 0, 00:31:03.526 "format": 0, 00:31:03.526 "firmware": 0, 00:31:03.526 "ns_manage": 0 00:31:03.526 }, 00:31:03.526 "multi_ctrlr": true, 00:31:03.526 "ana_reporting": false 00:31:03.526 }, 00:31:03.526 "vs": { 00:31:03.526 "nvme_version": "1.3" 00:31:03.526 }, 00:31:03.526 "ns_data": { 00:31:03.526 "id": 1, 00:31:03.526 "can_share": true 00:31:03.526 } 00:31:03.526 } 00:31:03.526 ], 00:31:03.526 "mp_policy": "active_passive" 00:31:03.526 } 00:31:03.526 } 00:31:03.526 ] 00:31:03.526 05:52:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1960962 00:31:03.526 05:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:03.526 05:52:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:03.786 Running I/O for 10 seconds... 00:31:04.723 Latency(us) 00:31:04.723 [2024-11-27T04:52:52.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:04.723 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:31:04.723 [2024-11-27T04:52:52.727Z] =================================================================================================================== 00:31:04.723 [2024-11-27T04:52:52.727Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:31:04.723 00:31:05.660 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:05.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.660 Nvme0n1 : 2.00 23019.00 89.92 0.00 0.00 0.00 0.00 0.00 00:31:05.660 [2024-11-27T04:52:53.664Z] =================================================================================================================== 00:31:05.660 [2024-11-27T04:52:53.664Z] Total : 23019.00 89.92 0.00 0.00 0.00 0.00 0.00 00:31:05.660 00:31:05.660 true 00:31:05.918 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:05.918 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:05.918 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:05.918 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:05.918 05:52:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1960962 00:31:06.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.854 Nvme0n1 : 3.00 23088.00 90.19 0.00 0.00 0.00 0.00 0.00 00:31:06.854 [2024-11-27T04:52:54.858Z] =================================================================================================================== 00:31:06.854 [2024-11-27T04:52:54.858Z] Total : 23088.00 90.19 0.00 0.00 0.00 0.00 0.00 00:31:06.854 00:31:07.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.791 Nvme0n1 : 4.00 23189.75 90.58 0.00 0.00 0.00 0.00 0.00 00:31:07.791 [2024-11-27T04:52:55.795Z] =================================================================================================================== 00:31:07.791 [2024-11-27T04:52:55.795Z] Total : 23189.75 90.58 0.00 0.00 0.00 0.00 0.00 00:31:07.791 00:31:08.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.728 Nvme0n1 : 5.00 23250.80 90.82 0.00 0.00 0.00 0.00 0.00 00:31:08.728 [2024-11-27T04:52:56.732Z] =================================================================================================================== 00:31:08.728 [2024-11-27T04:52:56.732Z] Total : 23250.80 90.82 0.00 0.00 0.00 0.00 0.00 00:31:08.728 00:31:09.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:09.664 Nvme0n1 : 6.00 23302.17 91.02 0.00 0.00 0.00 0.00 0.00 00:31:09.664 [2024-11-27T04:52:57.668Z] =================================================================================================================== 00:31:09.664 [2024-11-27T04:52:57.668Z] Total : 23302.17 91.02 0.00 0.00 0.00 0.00 0.00 00:31:09.664 00:31:10.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:10.601 Nvme0n1 : 7.00 23347.86 91.20 0.00 0.00 0.00 0.00 0.00 00:31:10.601 [2024-11-27T04:52:58.605Z] =================================================================================================================== 00:31:10.601 [2024-11-27T04:52:58.605Z] Total : 23347.86 91.20 0.00 0.00 0.00 0.00 0.00 00:31:10.601 00:31:11.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:11.978 Nvme0n1 : 8.00 23366.25 91.27 0.00 0.00 0.00 0.00 0.00 00:31:11.978 [2024-11-27T04:52:59.982Z] =================================================================================================================== 00:31:11.978 [2024-11-27T04:52:59.982Z] Total : 23366.25 91.27 0.00 0.00 0.00 0.00 0.00 00:31:11.978 00:31:12.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:12.915 Nvme0n1 : 9.00 23356.11 91.23 0.00 0.00 0.00 0.00 0.00 00:31:12.915 [2024-11-27T04:53:00.919Z] =================================================================================================================== 00:31:12.915 [2024-11-27T04:53:00.919Z] Total : 23356.11 91.23 0.00 0.00 0.00 0.00 0.00 00:31:12.915 00:31:13.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:13.852 Nvme0n1 : 10.00 23382.70 91.34 0.00 0.00 0.00 0.00 0.00 00:31:13.852 [2024-11-27T04:53:01.856Z] =================================================================================================================== 00:31:13.852 [2024-11-27T04:53:01.857Z] Total : 23382.70 91.34 0.00 0.00 0.00 0.00 0.00 00:31:13.853 00:31:13.853 
00:31:13.853 Latency(us) 00:31:13.853 [2024-11-27T04:53:01.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:13.853 Nvme0n1 : 10.01 23381.36 91.33 0.00 0.00 5471.38 3167.57 25090.93 00:31:13.853 [2024-11-27T04:53:01.857Z] =================================================================================================================== 00:31:13.853 [2024-11-27T04:53:01.857Z] Total : 23381.36 91.33 0.00 0.00 5471.38 3167.57 25090.93 00:31:13.853 { 00:31:13.853 "results": [ 00:31:13.853 { 00:31:13.853 "job": "Nvme0n1", 00:31:13.853 "core_mask": "0x2", 00:31:13.853 "workload": "randwrite", 00:31:13.853 "status": "finished", 00:31:13.853 "queue_depth": 128, 00:31:13.853 "io_size": 4096, 00:31:13.853 "runtime": 10.006047, 00:31:13.853 "iops": 23381.361290827437, 00:31:13.853 "mibps": 91.33344254229468, 00:31:13.853 "io_failed": 0, 00:31:13.853 "io_timeout": 0, 00:31:13.853 "avg_latency_us": 5471.378116890611, 00:31:13.853 "min_latency_us": 3167.5733333333333, 00:31:13.853 "max_latency_us": 25090.925714285713 00:31:13.853 } 00:31:13.853 ], 00:31:13.853 "core_count": 1 00:31:13.853 } 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1960817 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1960817 ']' 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1960817 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.853 05:53:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1960817 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1960817' 00:31:13.853 killing process with pid 1960817 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1960817 00:31:13.853 Received shutdown signal, test time was about 10.000000 seconds 00:31:13.853 00:31:13.853 Latency(us) 00:31:13.853 [2024-11-27T04:53:01.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.853 [2024-11-27T04:53:01.857Z] =================================================================================================================== 00:31:13.853 [2024-11-27T04:53:01.857Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1960817 00:31:13.853 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:14.112 05:53:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.371 05:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:14.371 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1957741 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1957741 00:31:14.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1957741 Killed "${NVMF_APP[@]}" "$@" 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1962659 00:31:14.631 05:53:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1962659 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1962659 ']' 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.631 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:14.631 [2024-11-27 05:53:02.472272] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:14.631 [2024-11-27 05:53:02.473222] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:31:14.631 [2024-11-27 05:53:02.473259] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.631 [2024-11-27 05:53:02.553556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.631 [2024-11-27 05:53:02.593743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.631 [2024-11-27 05:53:02.593780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.631 [2024-11-27 05:53:02.593788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.631 [2024-11-27 05:53:02.593794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.631 [2024-11-27 05:53:02.593799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.631 [2024-11-27 05:53:02.594343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.890 [2024-11-27 05:53:02.663135] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:14.890 [2024-11-27 05:53:02.663381] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:14.890 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.890 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:14.890 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.890 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.890 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:14.890 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.891 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:15.150 [2024-11-27 05:53:02.903795] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:15.150 [2024-11-27 05:53:02.904011] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:15.150 [2024-11-27 05:53:02.904095] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:15.150 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:15.150 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 79550ea3-779e-4c87-b63f-1dd6d434eb9c 00:31:15.150 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=79550ea3-779e-4c87-b63f-1dd6d434eb9c 00:31:15.150 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:15.150 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:15.150 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:15.150 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:15.150 05:53:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:15.150 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 79550ea3-779e-4c87-b63f-1dd6d434eb9c -t 2000 00:31:15.409 [ 00:31:15.409 { 00:31:15.409 "name": "79550ea3-779e-4c87-b63f-1dd6d434eb9c", 00:31:15.409 "aliases": [ 00:31:15.409 "lvs/lvol" 00:31:15.409 ], 00:31:15.409 "product_name": "Logical Volume", 00:31:15.409 "block_size": 4096, 00:31:15.409 "num_blocks": 38912, 00:31:15.409 "uuid": "79550ea3-779e-4c87-b63f-1dd6d434eb9c", 00:31:15.409 "assigned_rate_limits": { 00:31:15.409 "rw_ios_per_sec": 0, 00:31:15.409 "rw_mbytes_per_sec": 0, 00:31:15.409 "r_mbytes_per_sec": 0, 00:31:15.409 "w_mbytes_per_sec": 0 00:31:15.409 }, 00:31:15.409 "claimed": false, 00:31:15.409 "zoned": false, 00:31:15.409 "supported_io_types": { 00:31:15.409 "read": true, 00:31:15.409 "write": true, 00:31:15.409 "unmap": true, 00:31:15.409 "flush": false, 00:31:15.409 "reset": true, 00:31:15.409 "nvme_admin": false, 00:31:15.409 "nvme_io": false, 00:31:15.409 "nvme_io_md": false, 00:31:15.409 "write_zeroes": true, 
00:31:15.409 "zcopy": false, 00:31:15.409 "get_zone_info": false, 00:31:15.409 "zone_management": false, 00:31:15.409 "zone_append": false, 00:31:15.409 "compare": false, 00:31:15.409 "compare_and_write": false, 00:31:15.409 "abort": false, 00:31:15.409 "seek_hole": true, 00:31:15.409 "seek_data": true, 00:31:15.409 "copy": false, 00:31:15.409 "nvme_iov_md": false 00:31:15.409 }, 00:31:15.409 "driver_specific": { 00:31:15.409 "lvol": { 00:31:15.409 "lvol_store_uuid": "9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169", 00:31:15.409 "base_bdev": "aio_bdev", 00:31:15.409 "thin_provision": false, 00:31:15.409 "num_allocated_clusters": 38, 00:31:15.409 "snapshot": false, 00:31:15.409 "clone": false, 00:31:15.409 "esnap_clone": false 00:31:15.409 } 00:31:15.409 } 00:31:15.409 } 00:31:15.409 ] 00:31:15.409 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:15.409 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:15.409 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:15.668 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:15.668 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:15.668 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:15.928 [2024-11-27 05:53:03.870838] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:15.928 05:53:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:16.187 request: 00:31:16.187 { 00:31:16.187 "uuid": "9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169", 00:31:16.187 "method": "bdev_lvol_get_lvstores", 00:31:16.187 "req_id": 1 00:31:16.187 } 00:31:16.187 Got JSON-RPC error response 00:31:16.187 response: 00:31:16.187 { 00:31:16.187 "code": -19, 00:31:16.187 "message": "No such device" 00:31:16.187 } 00:31:16.187 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:16.187 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:16.187 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:16.187 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:16.187 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:16.446 aio_bdev 00:31:16.446 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 79550ea3-779e-4c87-b63f-1dd6d434eb9c 00:31:16.446 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=79550ea3-779e-4c87-b63f-1dd6d434eb9c 00:31:16.446 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:16.446 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:16.446 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:16.446 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:16.446 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:16.705 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 79550ea3-779e-4c87-b63f-1dd6d434eb9c -t 2000 00:31:16.705 [ 00:31:16.705 { 00:31:16.705 "name": "79550ea3-779e-4c87-b63f-1dd6d434eb9c", 00:31:16.705 "aliases": [ 00:31:16.705 "lvs/lvol" 00:31:16.705 ], 00:31:16.705 "product_name": "Logical Volume", 00:31:16.705 "block_size": 4096, 00:31:16.705 "num_blocks": 38912, 00:31:16.705 "uuid": "79550ea3-779e-4c87-b63f-1dd6d434eb9c", 00:31:16.705 "assigned_rate_limits": { 00:31:16.705 "rw_ios_per_sec": 0, 00:31:16.705 "rw_mbytes_per_sec": 0, 00:31:16.705 
"r_mbytes_per_sec": 0, 00:31:16.705 "w_mbytes_per_sec": 0 00:31:16.705 }, 00:31:16.705 "claimed": false, 00:31:16.705 "zoned": false, 00:31:16.705 "supported_io_types": { 00:31:16.705 "read": true, 00:31:16.705 "write": true, 00:31:16.705 "unmap": true, 00:31:16.705 "flush": false, 00:31:16.705 "reset": true, 00:31:16.705 "nvme_admin": false, 00:31:16.705 "nvme_io": false, 00:31:16.705 "nvme_io_md": false, 00:31:16.705 "write_zeroes": true, 00:31:16.705 "zcopy": false, 00:31:16.705 "get_zone_info": false, 00:31:16.705 "zone_management": false, 00:31:16.705 "zone_append": false, 00:31:16.705 "compare": false, 00:31:16.705 "compare_and_write": false, 00:31:16.705 "abort": false, 00:31:16.705 "seek_hole": true, 00:31:16.705 "seek_data": true, 00:31:16.705 "copy": false, 00:31:16.705 "nvme_iov_md": false 00:31:16.705 }, 00:31:16.705 "driver_specific": { 00:31:16.705 "lvol": { 00:31:16.705 "lvol_store_uuid": "9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169", 00:31:16.705 "base_bdev": "aio_bdev", 00:31:16.705 "thin_provision": false, 00:31:16.705 "num_allocated_clusters": 38, 00:31:16.705 "snapshot": false, 00:31:16.705 "clone": false, 00:31:16.705 "esnap_clone": false 00:31:16.705 } 00:31:16.705 } 00:31:16.705 } 00:31:16.705 ] 00:31:16.705 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:16.705 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:16.706 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:16.965 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:16.965 05:53:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:16.965 05:53:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:17.223 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:17.223 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 79550ea3-779e-4c87-b63f-1dd6d434eb9c 00:31:17.482 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b9ceb12-24dc-4dd9-a8a3-2a87f5cc7169 00:31:17.741 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:17.741 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:17.741 00:31:17.741 real 0m17.092s 00:31:17.741 user 0m33.523s 00:31:17.741 sys 0m4.833s 00:31:17.741 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.741 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:17.741 ************************************ 00:31:17.741 END TEST lvs_grow_dirty 00:31:17.741 ************************************ 
00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:18.000 nvmf_trace.0 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.000 05:53:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.000 rmmod nvme_tcp 00:31:18.000 rmmod nvme_fabrics 00:31:18.000 rmmod nvme_keyring 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1962659 ']' 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1962659 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1962659 ']' 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1962659 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1962659 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:18.000 
05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1962659' 00:31:18.000 killing process with pid 1962659 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1962659 00:31:18.000 05:53:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1962659 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.259 05:53:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.796 
05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.796 00:31:20.796 real 0m41.912s 00:31:20.796 user 0m51.103s 00:31:20.796 sys 0m11.305s 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:20.796 ************************************ 00:31:20.796 END TEST nvmf_lvs_grow 00:31:20.796 ************************************ 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:20.796 ************************************ 00:31:20.796 START TEST nvmf_bdev_io_wait 00:31:20.796 ************************************ 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:20.796 * Looking for test storage... 
00:31:20.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:20.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.796 --rc genhtml_branch_coverage=1 00:31:20.796 --rc genhtml_function_coverage=1 00:31:20.796 --rc genhtml_legend=1 00:31:20.796 --rc geninfo_all_blocks=1 00:31:20.796 --rc geninfo_unexecuted_blocks=1 00:31:20.796 00:31:20.796 ' 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:20.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.796 --rc genhtml_branch_coverage=1 00:31:20.796 --rc genhtml_function_coverage=1 00:31:20.796 --rc genhtml_legend=1 00:31:20.796 --rc geninfo_all_blocks=1 00:31:20.796 --rc geninfo_unexecuted_blocks=1 00:31:20.796 00:31:20.796 ' 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:20.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.796 --rc genhtml_branch_coverage=1 00:31:20.796 --rc genhtml_function_coverage=1 00:31:20.796 --rc genhtml_legend=1 00:31:20.796 --rc geninfo_all_blocks=1 00:31:20.796 --rc geninfo_unexecuted_blocks=1 00:31:20.796 00:31:20.796 ' 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:20.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.796 --rc genhtml_branch_coverage=1 00:31:20.796 --rc genhtml_function_coverage=1 
00:31:20.796 --rc genhtml_legend=1 00:31:20.796 --rc geninfo_all_blocks=1 00:31:20.796 --rc geninfo_unexecuted_blocks=1 00:31:20.796 00:31:20.796 ' 00:31:20.796 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:20.797 05:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.797 05:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:20.797 05:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:20.797 05:53:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:20.797 05:53:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:26.087 05:53:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:26.087 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:26.087 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.087 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:26.088 Found net devices under 0000:86:00.0: cvl_0_0 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:26.088 Found net devices under 0000:86:00.1: cvl_0_1 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:26.088 05:53:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:26.088 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:26.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:26.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:31:26.347 00:31:26.347 --- 10.0.0.2 ping statistics --- 00:31:26.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.347 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:31:26.347 00:31:26.347 --- 10.0.0.1 ping statistics --- 00:31:26.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.347 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:26.347 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:26.607 05:53:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1966717 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1966717 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1966717 ']' 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.607 05:53:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:26.607 [2024-11-27 05:53:14.427542] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:26.607 [2024-11-27 05:53:14.428461] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:31:26.607 [2024-11-27 05:53:14.428499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.607 [2024-11-27 05:53:14.508188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:26.607 [2024-11-27 05:53:14.551200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.607 [2024-11-27 05:53:14.551240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.607 [2024-11-27 05:53:14.551246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.607 [2024-11-27 05:53:14.551252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.607 [2024-11-27 05:53:14.551258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:26.607 [2024-11-27 05:53:14.552787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.607 [2024-11-27 05:53:14.552821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.607 [2024-11-27 05:53:14.552948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.607 [2024-11-27 05:53:14.552949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:26.607 [2024-11-27 05:53:14.553345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:27.545 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.545 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:27.545 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.545 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.545 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.546 05:53:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 [2024-11-27 05:53:15.380661] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:27.546 [2024-11-27 05:53:15.380751] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:27.546 [2024-11-27 05:53:15.381248] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:27.546 [2024-11-27 05:53:15.381273] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 [2024-11-27 05:53:15.393737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 Malloc0 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.546 05:53:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:27.546 [2024-11-27 05:53:15.466035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1966957 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1966959 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:27.546 05:53:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:27.546 { 00:31:27.546 "params": { 00:31:27.546 "name": "Nvme$subsystem", 00:31:27.546 "trtype": "$TEST_TRANSPORT", 00:31:27.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.546 "adrfam": "ipv4", 00:31:27.546 "trsvcid": "$NVMF_PORT", 00:31:27.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.546 "hdgst": ${hdgst:-false}, 00:31:27.546 "ddgst": ${ddgst:-false} 00:31:27.546 }, 00:31:27.546 "method": "bdev_nvme_attach_controller" 00:31:27.546 } 00:31:27.546 EOF 00:31:27.546 )") 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1966961 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:27.546 05:53:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:27.546 { 00:31:27.546 "params": { 00:31:27.546 "name": "Nvme$subsystem", 00:31:27.546 "trtype": "$TEST_TRANSPORT", 00:31:27.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.546 "adrfam": "ipv4", 00:31:27.546 "trsvcid": "$NVMF_PORT", 00:31:27.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.546 "hdgst": ${hdgst:-false}, 00:31:27.546 "ddgst": ${ddgst:-false} 00:31:27.546 }, 00:31:27.546 "method": "bdev_nvme_attach_controller" 00:31:27.546 } 00:31:27.546 EOF 00:31:27.546 )") 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1966964 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:27.546 { 00:31:27.546 "params": { 00:31:27.546 "name": "Nvme$subsystem", 00:31:27.546 "trtype": "$TEST_TRANSPORT", 00:31:27.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.546 "adrfam": "ipv4", 00:31:27.546 "trsvcid": "$NVMF_PORT", 00:31:27.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.546 "hdgst": ${hdgst:-false}, 00:31:27.546 "ddgst": ${ddgst:-false} 00:31:27.546 }, 00:31:27.546 "method": "bdev_nvme_attach_controller" 00:31:27.546 } 00:31:27.546 EOF 00:31:27.546 )") 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:27.546 { 00:31:27.546 "params": { 00:31:27.546 "name": "Nvme$subsystem", 00:31:27.546 "trtype": "$TEST_TRANSPORT", 00:31:27.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.546 "adrfam": "ipv4", 00:31:27.546 "trsvcid": "$NVMF_PORT", 00:31:27.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.546 "hdgst": ${hdgst:-false}, 00:31:27.546 "ddgst": ${ddgst:-false} 00:31:27.546 }, 00:31:27.546 "method": 
"bdev_nvme_attach_controller" 00:31:27.546 } 00:31:27.546 EOF 00:31:27.546 )") 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1966957 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:27.546 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:27.547 "params": { 00:31:27.547 "name": "Nvme1", 00:31:27.547 "trtype": "tcp", 00:31:27.547 "traddr": "10.0.0.2", 00:31:27.547 "adrfam": "ipv4", 00:31:27.547 "trsvcid": "4420", 00:31:27.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:27.547 "hdgst": false, 00:31:27.547 "ddgst": false 00:31:27.547 }, 00:31:27.547 "method": "bdev_nvme_attach_controller" 00:31:27.547 }' 00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:27.547 "params": { 00:31:27.547 "name": "Nvme1", 00:31:27.547 "trtype": "tcp", 00:31:27.547 "traddr": "10.0.0.2", 00:31:27.547 "adrfam": "ipv4", 00:31:27.547 "trsvcid": "4420", 00:31:27.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:27.547 "hdgst": false, 00:31:27.547 "ddgst": false 00:31:27.547 }, 00:31:27.547 "method": "bdev_nvme_attach_controller" 00:31:27.547 }' 00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:27.547 "params": { 00:31:27.547 "name": "Nvme1", 00:31:27.547 "trtype": "tcp", 00:31:27.547 "traddr": "10.0.0.2", 00:31:27.547 "adrfam": "ipv4", 00:31:27.547 "trsvcid": "4420", 00:31:27.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:27.547 "hdgst": false, 00:31:27.547 "ddgst": false 00:31:27.547 }, 00:31:27.547 "method": "bdev_nvme_attach_controller" 00:31:27.547 }' 00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:27.547 05:53:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:27.547 "params": { 00:31:27.547 "name": "Nvme1", 00:31:27.547 "trtype": "tcp", 00:31:27.547 "traddr": "10.0.0.2", 00:31:27.547 "adrfam": "ipv4", 00:31:27.547 "trsvcid": "4420", 00:31:27.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:27.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:27.547 "hdgst": false, 00:31:27.547 "ddgst": false 00:31:27.547 }, 00:31:27.547 "method": "bdev_nvme_attach_controller" 
00:31:27.547 }' 00:31:27.547 [2024-11-27 05:53:15.519759] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:31:27.547 [2024-11-27 05:53:15.519814] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:27.547 [2024-11-27 05:53:15.521344] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:31:27.547 [2024-11-27 05:53:15.521391] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:27.547 [2024-11-27 05:53:15.521916] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:31:27.547 [2024-11-27 05:53:15.521928] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:31:27.547 [2024-11-27 05:53:15.521964] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:27.547 [2024-11-27 05:53:15.521969] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:27.806 [2024-11-27 05:53:15.705836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.806 [2024-11-27 05:53:15.748312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:27.806 [2024-11-27 05:53:15.805114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.065 [2024-11-27 05:53:15.845449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:28.065 [2024-11-27 05:53:15.898530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.065 [2024-11-27 05:53:15.946086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:28.065 [2024-11-27 05:53:15.958509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.065 [2024-11-27 05:53:16.001257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:28.065 Running I/O for 1 seconds... 00:31:28.324 Running I/O for 1 seconds... 00:31:28.324 Running I/O for 1 seconds... 00:31:28.324 Running I/O for 1 seconds... 
00:31:29.261 14254.00 IOPS, 55.68 MiB/s 00:31:29.261 Latency(us) 00:31:29.261 [2024-11-27T04:53:17.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.261 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:29.261 Nvme1n1 : 1.01 14298.14 55.85 0.00 0.00 8924.25 3386.03 10548.18 00:31:29.261 [2024-11-27T04:53:17.265Z] =================================================================================================================== 00:31:29.261 [2024-11-27T04:53:17.265Z] Total : 14298.14 55.85 0.00 0.00 8924.25 3386.03 10548.18 00:31:29.261 6888.00 IOPS, 26.91 MiB/s 00:31:29.261 Latency(us) 00:31:29.261 [2024-11-27T04:53:17.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.261 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:29.261 Nvme1n1 : 1.01 6927.52 27.06 0.00 0.00 18349.57 4462.69 23468.13 00:31:29.261 [2024-11-27T04:53:17.265Z] =================================================================================================================== 00:31:29.261 [2024-11-27T04:53:17.265Z] Total : 6927.52 27.06 0.00 0.00 18349.57 4462.69 23468.13 00:31:29.261 244264.00 IOPS, 954.16 MiB/s 00:31:29.261 Latency(us) 00:31:29.261 [2024-11-27T04:53:17.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.261 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:29.261 Nvme1n1 : 1.00 243901.03 952.74 0.00 0.00 521.89 221.38 1490.16 00:31:29.261 [2024-11-27T04:53:17.265Z] =================================================================================================================== 00:31:29.261 [2024-11-27T04:53:17.265Z] Total : 243901.03 952.74 0.00 0.00 521.89 221.38 1490.16 00:31:29.261 7100.00 IOPS, 27.73 MiB/s 00:31:29.261 Latency(us) 00:31:29.261 [2024-11-27T04:53:17.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.261 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:31:29.261 Nvme1n1 : 1.00 7208.33 28.16 0.00 0.00 17716.51 3183.18 35202.19 00:31:29.261 [2024-11-27T04:53:17.265Z] =================================================================================================================== 00:31:29.261 [2024-11-27T04:53:17.265Z] Total : 7208.33 28.16 0.00 0.00 17716.51 3183.18 35202.19 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1966959 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1966961 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1966964 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:29.520 05:53:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:29.520 rmmod nvme_tcp 00:31:29.520 rmmod nvme_fabrics 00:31:29.520 rmmod nvme_keyring 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1966717 ']' 00:31:29.520 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1966717 00:31:29.521 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1966717 ']' 00:31:29.521 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1966717 00:31:29.521 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:29.521 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:29.521 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1966717 00:31:29.521 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:29.521 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:29.521 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1966717' 00:31:29.521 killing process with pid 1966717 00:31:29.521 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1966717 00:31:29.521 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1966717 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.780 05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.780 
05:53:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:31.686 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:31.686 00:31:31.686 real 0m11.418s 00:31:31.686 user 0m14.972s 00:31:31.686 sys 0m6.510s 00:31:31.686 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.686 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:31.686 ************************************ 00:31:31.686 END TEST nvmf_bdev_io_wait 00:31:31.686 ************************************ 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:31.945 ************************************ 00:31:31.945 START TEST nvmf_queue_depth 00:31:31.945 ************************************ 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:31.945 * Looking for test storage... 
00:31:31.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:31.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.945 --rc genhtml_branch_coverage=1 00:31:31.945 --rc genhtml_function_coverage=1 00:31:31.945 --rc genhtml_legend=1 00:31:31.945 --rc geninfo_all_blocks=1 00:31:31.945 --rc geninfo_unexecuted_blocks=1 00:31:31.945 00:31:31.945 ' 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:31.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.945 --rc genhtml_branch_coverage=1 00:31:31.945 --rc genhtml_function_coverage=1 00:31:31.945 --rc genhtml_legend=1 00:31:31.945 --rc geninfo_all_blocks=1 00:31:31.945 --rc geninfo_unexecuted_blocks=1 00:31:31.945 00:31:31.945 ' 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:31.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.945 --rc genhtml_branch_coverage=1 00:31:31.945 --rc genhtml_function_coverage=1 00:31:31.945 --rc genhtml_legend=1 00:31:31.945 --rc geninfo_all_blocks=1 00:31:31.945 --rc geninfo_unexecuted_blocks=1 00:31:31.945 00:31:31.945 ' 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:31.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:31.945 --rc genhtml_branch_coverage=1 00:31:31.945 --rc genhtml_function_coverage=1 00:31:31.945 --rc genhtml_legend=1 00:31:31.945 --rc 
geninfo_all_blocks=1 00:31:31.945 --rc geninfo_unexecuted_blocks=1 00:31:31.945 00:31:31.945 ' 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:31.945 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.946 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.946 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.946 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:31.946 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.946 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:31.946 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.205 05:53:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:32.205 05:53:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:32.205 05:53:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:32.205 05:53:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:38.775 
05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:38.775 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:38.776 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:38.776 05:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:38.776 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:38.776 Found net devices under 0000:86:00.0: cvl_0_0 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:38.776 Found net devices under 0000:86:00.1: cvl_0_1 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:38.776 05:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:38.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:38.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:31:38.776 00:31:38.776 --- 10.0.0.2 ping statistics --- 00:31:38.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.776 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:38.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:38.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:31:38.776 00:31:38.776 --- 10.0.0.1 ping statistics --- 00:31:38.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:38.776 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:38.776 05:53:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1970737 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1970737 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1970737 ']' 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.776 05:53:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.776 [2024-11-27 05:53:25.897586] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:38.777 [2024-11-27 05:53:25.898541] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:31:38.777 [2024-11-27 05:53:25.898575] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.777 [2024-11-27 05:53:25.978658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.777 [2024-11-27 05:53:26.018912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:38.777 [2024-11-27 05:53:26.018959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:38.777 [2024-11-27 05:53:26.018966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:38.777 [2024-11-27 05:53:26.018972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:38.777 [2024-11-27 05:53:26.018977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:38.777 [2024-11-27 05:53:26.019545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.777 [2024-11-27 05:53:26.086810] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:38.777 [2024-11-27 05:53:26.087025] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.777 [2024-11-27 05:53:26.160206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.777 Malloc0 00:31:38.777 05:53:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.777 [2024-11-27 05:53:26.236345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.777 
05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1970870 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1970870 /var/tmp/bdevperf.sock 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1970870 ']' 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:38.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.777 [2024-11-27 05:53:26.287976] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:31:38.777 [2024-11-27 05:53:26.288018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970870 ] 00:31:38.777 [2024-11-27 05:53:26.362981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:38.777 [2024-11-27 05:53:26.405252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:38.777 NVMe0n1 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:38.777 05:53:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:39.036 Running I/O for 10 seconds... 
00:31:41.048 11517.00 IOPS, 44.99 MiB/s [2024-11-27T04:53:29.989Z] 12149.00 IOPS, 47.46 MiB/s [2024-11-27T04:53:30.924Z] 12197.67 IOPS, 47.65 MiB/s [2024-11-27T04:53:31.859Z] 12274.25 IOPS, 47.95 MiB/s [2024-11-27T04:53:33.234Z] 12288.80 IOPS, 48.00 MiB/s [2024-11-27T04:53:34.166Z] 12296.67 IOPS, 48.03 MiB/s [2024-11-27T04:53:35.102Z] 12382.00 IOPS, 48.37 MiB/s [2024-11-27T04:53:36.037Z] 12389.38 IOPS, 48.40 MiB/s [2024-11-27T04:53:36.973Z] 12403.56 IOPS, 48.45 MiB/s [2024-11-27T04:53:36.973Z] 12392.80 IOPS, 48.41 MiB/s 00:31:48.969 Latency(us) 00:31:48.969 [2024-11-27T04:53:36.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.969 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:48.969 Verification LBA range: start 0x0 length 0x4000 00:31:48.969 NVMe0n1 : 10.05 12431.18 48.56 0.00 0.00 82119.51 12670.29 55674.39 00:31:48.969 [2024-11-27T04:53:36.973Z] =================================================================================================================== 00:31:48.969 [2024-11-27T04:53:36.973Z] Total : 12431.18 48.56 0.00 0.00 82119.51 12670.29 55674.39 00:31:48.969 { 00:31:48.969 "results": [ 00:31:48.969 { 00:31:48.969 "job": "NVMe0n1", 00:31:48.969 "core_mask": "0x1", 00:31:48.969 "workload": "verify", 00:31:48.969 "status": "finished", 00:31:48.970 "verify_range": { 00:31:48.970 "start": 0, 00:31:48.970 "length": 16384 00:31:48.970 }, 00:31:48.970 "queue_depth": 1024, 00:31:48.970 "io_size": 4096, 00:31:48.970 "runtime": 10.051497, 00:31:48.970 "iops": 12431.183136203494, 00:31:48.970 "mibps": 48.5593091257949, 00:31:48.970 "io_failed": 0, 00:31:48.970 "io_timeout": 0, 00:31:48.970 "avg_latency_us": 82119.51002484765, 00:31:48.970 "min_latency_us": 12670.293333333333, 00:31:48.970 "max_latency_us": 55674.392380952384 00:31:48.970 } 00:31:48.970 ], 00:31:48.970 "core_count": 1 00:31:48.970 } 00:31:48.970 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1970870 00:31:48.970 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1970870 ']' 00:31:48.970 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1970870 00:31:48.970 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:48.970 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:48.970 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1970870 00:31:48.970 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:48.970 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:48.970 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1970870' 00:31:48.970 killing process with pid 1970870 00:31:48.970 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1970870 00:31:48.970 Received shutdown signal, test time was about 10.000000 seconds 00:31:48.970 00:31:48.970 Latency(us) 00:31:48.970 [2024-11-27T04:53:36.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.970 [2024-11-27T04:53:36.974Z] =================================================================================================================== 00:31:48.970 [2024-11-27T04:53:36.974Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:49.228 05:53:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1970870 00:31:49.228 05:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:49.228 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:49.229 rmmod nvme_tcp 00:31:49.229 rmmod nvme_fabrics 00:31:49.229 rmmod nvme_keyring 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1970737 ']' 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1970737 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1970737 ']' 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1970737 00:31:49.229 05:53:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:49.229 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1970737 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1970737' 00:31:49.488 killing process with pid 1970737 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1970737 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1970737 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.488 05:53:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.022 00:31:52.022 real 0m19.745s 00:31:52.022 user 0m22.833s 00:31:52.022 sys 0m6.285s 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:52.022 ************************************ 00:31:52.022 END TEST nvmf_queue_depth 00:31:52.022 ************************************ 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:52.022 ************************************ 00:31:52.022 START 
TEST nvmf_target_multipath 00:31:52.022 ************************************ 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:52.022 * Looking for test storage... 00:31:52.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.022 05:53:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.022 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:52.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.023 --rc genhtml_branch_coverage=1 00:31:52.023 --rc genhtml_function_coverage=1 00:31:52.023 --rc genhtml_legend=1 00:31:52.023 --rc geninfo_all_blocks=1 00:31:52.023 --rc geninfo_unexecuted_blocks=1 00:31:52.023 00:31:52.023 ' 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:52.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.023 --rc genhtml_branch_coverage=1 00:31:52.023 --rc genhtml_function_coverage=1 00:31:52.023 --rc genhtml_legend=1 00:31:52.023 --rc geninfo_all_blocks=1 00:31:52.023 --rc geninfo_unexecuted_blocks=1 00:31:52.023 00:31:52.023 ' 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:52.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.023 --rc genhtml_branch_coverage=1 00:31:52.023 --rc genhtml_function_coverage=1 00:31:52.023 --rc genhtml_legend=1 00:31:52.023 --rc geninfo_all_blocks=1 00:31:52.023 --rc geninfo_unexecuted_blocks=1 00:31:52.023 00:31:52.023 ' 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:52.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.023 --rc genhtml_branch_coverage=1 00:31:52.023 --rc genhtml_function_coverage=1 00:31:52.023 --rc genhtml_legend=1 00:31:52.023 --rc geninfo_all_blocks=1 00:31:52.023 --rc geninfo_unexecuted_blocks=1 00:31:52.023 00:31:52.023 ' 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.023 05:53:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.023 05:53:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:52.023 05:53:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:58.596 05:53:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:58.596 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:58.596 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:58.596 Found net devices under 0000:86:00.0: cvl_0_0 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:58.596 05:53:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:58.596 Found net devices under 0000:86:00.1: cvl_0_1 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:58.596 05:53:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:58.596 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:58.597 05:53:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:58.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:58.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:31:58.597 00:31:58.597 --- 10.0.0.2 ping statistics --- 00:31:58.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.597 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:58.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:58.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:31:58.597 00:31:58.597 --- 10.0.0.1 ping statistics --- 00:31:58.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:58.597 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:58.597 only one NIC for nvmf test 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:58.597 05:53:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.597 rmmod nvme_tcp 00:31:58.597 rmmod nvme_fabrics 00:31:58.597 rmmod nvme_keyring 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:58.597 05:53:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.597 05:53:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.976 
05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:59.976 00:31:59.976 real 0m8.297s 00:31:59.976 user 0m1.761s 00:31:59.976 sys 0m4.531s 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:59.976 ************************************ 00:31:59.976 END TEST nvmf_target_multipath 00:31:59.976 ************************************ 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:59.976 ************************************ 00:31:59.976 START TEST nvmf_zcopy 00:31:59.976 ************************************ 00:31:59.976 05:53:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:00.236 * Looking for test storage... 
00:32:00.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.236 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:00.236 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:32:00.236 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:00.236 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:00.236 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:00.236 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:00.236 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:00.237 05:53:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:00.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.237 --rc genhtml_branch_coverage=1 00:32:00.237 --rc genhtml_function_coverage=1 00:32:00.237 --rc genhtml_legend=1 00:32:00.237 --rc geninfo_all_blocks=1 00:32:00.237 --rc geninfo_unexecuted_blocks=1 00:32:00.237 00:32:00.237 ' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:00.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.237 --rc genhtml_branch_coverage=1 00:32:00.237 --rc genhtml_function_coverage=1 00:32:00.237 --rc genhtml_legend=1 00:32:00.237 --rc geninfo_all_blocks=1 00:32:00.237 --rc geninfo_unexecuted_blocks=1 00:32:00.237 00:32:00.237 ' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:00.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.237 --rc genhtml_branch_coverage=1 00:32:00.237 --rc genhtml_function_coverage=1 00:32:00.237 --rc genhtml_legend=1 00:32:00.237 --rc geninfo_all_blocks=1 00:32:00.237 --rc geninfo_unexecuted_blocks=1 00:32:00.237 00:32:00.237 ' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:00.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:00.237 --rc genhtml_branch_coverage=1 00:32:00.237 --rc genhtml_function_coverage=1 00:32:00.237 --rc genhtml_legend=1 00:32:00.237 --rc geninfo_all_blocks=1 00:32:00.237 --rc geninfo_unexecuted_blocks=1 00:32:00.237 00:32:00.237 ' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.237 05:53:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:00.237 05:53:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:00.237 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:00.238 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:00.238 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.238 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.238 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.238 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:00.238 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:00.238 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:00.238 05:53:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.810 
05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.810 05:53:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:06.810 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:06.810 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:06.810 Found net devices under 0000:86:00.0: cvl_0_0 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.810 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:06.811 Found net devices under 0000:86:00.1: cvl_0_1 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.811 05:53:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:06.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:32:06.811 00:32:06.811 --- 10.0.0.2 ping statistics --- 00:32:06.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.811 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:06.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:32:06.811 00:32:06.811 --- 10.0.0.1 ping statistics --- 00:32:06.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.811 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1979541 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1979541 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1979541 ']' 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:06.811 05:53:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:06.811 [2024-11-27 05:53:54.046003] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:06.811 [2024-11-27 05:53:54.046939] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:32:06.811 [2024-11-27 05:53:54.046978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.811 [2024-11-27 05:53:54.125978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.811 [2024-11-27 05:53:54.166102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.811 [2024-11-27 05:53:54.166134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.811 [2024-11-27 05:53:54.166142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.811 [2024-11-27 05:53:54.166148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.811 [2024-11-27 05:53:54.166153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.811 [2024-11-27 05:53:54.166713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.811 [2024-11-27 05:53:54.234294] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:06.811 [2024-11-27 05:53:54.234541] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:06.811 [2024-11-27 05:53:54.299395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:06.811 
05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:06.811 [2024-11-27 05:53:54.323601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.811 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:06.812 malloc0 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:06.812 { 00:32:06.812 "params": { 00:32:06.812 "name": "Nvme$subsystem", 00:32:06.812 "trtype": "$TEST_TRANSPORT", 00:32:06.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.812 "adrfam": "ipv4", 00:32:06.812 "trsvcid": "$NVMF_PORT", 00:32:06.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.812 "hdgst": ${hdgst:-false}, 00:32:06.812 "ddgst": ${ddgst:-false} 00:32:06.812 }, 00:32:06.812 "method": "bdev_nvme_attach_controller" 00:32:06.812 } 00:32:06.812 EOF 00:32:06.812 )") 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:06.812 05:53:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:06.812 05:53:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:06.812 "params": { 00:32:06.812 "name": "Nvme1", 00:32:06.812 "trtype": "tcp", 00:32:06.812 "traddr": "10.0.0.2", 00:32:06.812 "adrfam": "ipv4", 00:32:06.812 "trsvcid": "4420", 00:32:06.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:06.812 "hdgst": false, 00:32:06.812 "ddgst": false 00:32:06.812 }, 00:32:06.812 "method": "bdev_nvme_attach_controller" 00:32:06.812 }' 00:32:06.812 [2024-11-27 05:53:54.415952] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:32:06.812 [2024-11-27 05:53:54.416005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979648 ] 00:32:06.812 [2024-11-27 05:53:54.491034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.812 [2024-11-27 05:53:54.531784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.812 Running I/O for 10 seconds... 
00:32:09.126 8500.00 IOPS, 66.41 MiB/s [2024-11-27T04:53:58.066Z] 8582.00 IOPS, 67.05 MiB/s [2024-11-27T04:53:59.003Z] 8610.00 IOPS, 67.27 MiB/s [2024-11-27T04:53:59.940Z] 8607.25 IOPS, 67.24 MiB/s [2024-11-27T04:54:00.876Z] 8611.40 IOPS, 67.28 MiB/s [2024-11-27T04:54:01.813Z] 8595.17 IOPS, 67.15 MiB/s [2024-11-27T04:54:02.751Z] 8602.57 IOPS, 67.21 MiB/s [2024-11-27T04:54:04.130Z] 8612.62 IOPS, 67.29 MiB/s [2024-11-27T04:54:05.067Z] 8618.89 IOPS, 67.34 MiB/s [2024-11-27T04:54:05.067Z] 8623.60 IOPS, 67.37 MiB/s 00:32:17.063 Latency(us) 00:32:17.063 [2024-11-27T04:54:05.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.063 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:17.063 Verification LBA range: start 0x0 length 0x1000 00:32:17.063 Nvme1n1 : 10.01 8627.49 67.40 0.00 0.00 14794.74 2402.99 21096.35 00:32:17.063 [2024-11-27T04:54:05.067Z] =================================================================================================================== 00:32:17.063 [2024-11-27T04:54:05.067Z] Total : 8627.49 67.40 0.00 0.00 14794.74 2402.99 21096.35 00:32:17.063 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1981375 00:32:17.063 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:17.063 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:17.063 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:17.063 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:17.063 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:17.063 05:54:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:17.063 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:17.063 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:17.063 { 00:32:17.063 "params": { 00:32:17.063 "name": "Nvme$subsystem", 00:32:17.063 "trtype": "$TEST_TRANSPORT", 00:32:17.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:17.063 "adrfam": "ipv4", 00:32:17.064 "trsvcid": "$NVMF_PORT", 00:32:17.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:17.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:17.064 "hdgst": ${hdgst:-false}, 00:32:17.064 "ddgst": ${ddgst:-false} 00:32:17.064 }, 00:32:17.064 "method": "bdev_nvme_attach_controller" 00:32:17.064 } 00:32:17.064 EOF 00:32:17.064 )") 00:32:17.064 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:17.064 [2024-11-27 05:54:04.927059] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:17.064 [2024-11-27 05:54:04.927095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:17.064 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:32:17.064 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:32:17.064 05:54:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:32:17.064 "params": {
00:32:17.064 "name": "Nvme1",
00:32:17.064 "trtype": "tcp",
00:32:17.064 "traddr": "10.0.0.2",
00:32:17.064 "adrfam": "ipv4",
00:32:17.064 "trsvcid": "4420",
00:32:17.064 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:32:17.064 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:32:17.064 "hdgst": false,
00:32:17.064 "ddgst": false
00:32:17.064 },
00:32:17.064 "method": "bdev_nvme_attach_controller"
00:32:17.064 }'
00:32:17.064 [2024-11-27 05:54:04.939023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:04.939035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.064 [2024-11-27 05:54:04.951020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:04.951030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.064 [2024-11-27 05:54:04.963021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:04.963032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.064 [2024-11-27 05:54:04.968022] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization...
00:32:17.064 [2024-11-27 05:54:04.968062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1981375 ]
00:32:17.064 [2024-11-27 05:54:04.975021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:04.975032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.064 [2024-11-27 05:54:04.987020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:04.987029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.064 [2024-11-27 05:54:04.999021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:04.999030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.064 [2024-11-27 05:54:05.011027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:05.011040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.064 [2024-11-27 05:54:05.023022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:05.023031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.064 [2024-11-27 05:54:05.035021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:05.035030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.064 [2024-11-27 05:54:05.043779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:17.064 [2024-11-27 05:54:05.047020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:05.047030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.064 [2024-11-27 05:54:05.059024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.064 [2024-11-27 05:54:05.059037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.071030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.071040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.083023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.083035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.085129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:17.324 [2024-11-27 05:54:05.095030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.095043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.107033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.107054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.119026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.119041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.131029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.131045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.143027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.143041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.155021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.155032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.167027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.167043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.179029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.179045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.191032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.191047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.203027] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.203041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.215024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.215036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.227034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.227044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.239020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.239029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.251031] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.251045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.263020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.263029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.275020] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.275032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.287018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.287027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.299024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.299036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.311021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.311030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.324 [2024-11-27 05:54:05.323022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.324 [2024-11-27 05:54:05.323031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.335021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.335033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.347026] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.347044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.359025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.359040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 Running I/O for 5 seconds...
00:32:17.583 [2024-11-27 05:54:05.374681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.374700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.388885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.388904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.403975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.403994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.419094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.419112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.431430] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.431447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.444943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.444961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.459505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.459523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.471751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.471768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.487552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.487569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.499542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.499559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.512758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.512776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.527556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.527574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.542782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.542801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.557358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.557376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.571594] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.571611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.583 [2024-11-27 05:54:05.583438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.583 [2024-11-27 05:54:05.583455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.842 [2024-11-27 05:54:05.596605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.842 [2024-11-27 05:54:05.596624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.610878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.610897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.623713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.623730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.636483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.636501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.651561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.651579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.662141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.662163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.677004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.677023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.691287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.691305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.703934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.703952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.716828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.716846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.731743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.731761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.743274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.743292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.757161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.757180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.771703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.771721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.786568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.786585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.800780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.800799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.815171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.815189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.827812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.827829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:17.843 [2024-11-27 05:54:05.840837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:17.843 [2024-11-27 05:54:05.840855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.855519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.855537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.866756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.866773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.881247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.881265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.895728] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.895745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.911330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.911347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.926550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.926568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.940893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.940910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.955738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.955756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.971342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.971360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.984043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.984062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:05.999516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:05.999535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:06.015250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:06.015268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:06.027595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:06.027613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:06.040851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:06.040869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:06.055582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:06.055600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:06.071061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:06.071078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:06.084998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:06.085017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.102 [2024-11-27 05:54:06.099236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.102 [2024-11-27 05:54:06.099254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.109650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.109674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.124500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.124519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.138990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.139009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.151535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.151553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.164828] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.164846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.179544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.179562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.190904] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.190922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.204448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.204466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.214697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.214715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.229285] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.229303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.243995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.244013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.255265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.255282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.268860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.268878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.283287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.283305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.293817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.293835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.308490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.308508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.322861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.322879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.336575] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.336593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.351085] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.351103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.361 [2024-11-27 05:54:06.361993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.361 [2024-11-27 05:54:06.362011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.621 16820.00 IOPS, 131.41 MiB/s [2024-11-27T04:54:06.625Z]
00:32:18.621 [2024-11-27 05:54:06.376718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.621 [2024-11-27 05:54:06.376736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.621 [2024-11-27 05:54:06.391169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.621 [2024-11-27 05:54:06.391188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.621 [2024-11-27 05:54:06.403074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.621 [2024-11-27 05:54:06.403092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.621 [2024-11-27 05:54:06.416959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.621 [2024-11-27 05:54:06.416978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.621 [2024-11-27 05:54:06.431816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.621 [2024-11-27 05:54:06.431834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.446931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.446949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.460949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.460968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.475471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.475488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.486767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.486784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.500613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.500631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.515175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.515193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.528302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.528320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.539441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.539458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.553148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.553170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.567796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.567813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.582549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.582567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.596655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.596677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.622 [2024-11-27 05:54:06.611230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.622 [2024-11-27 05:54:06.611248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.623966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.623985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.636371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.636388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.650618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.650635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.663257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.663274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.677079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.677097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.691583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.691601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.702554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.702571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.716796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.716813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.731074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.731092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.743559] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.743576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.756988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.757006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.771406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.771423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.787102] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.787120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.800173] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.800190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.810371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.810393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.824771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.824791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.839133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.839152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.850249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.850268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.864787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.864807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:18.881 [2024-11-27 05:54:06.879583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:18.881 [2024-11-27 05:54:06.879602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.139 [2024-11-27 05:54:06.895185] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.139 [2024-11-27 05:54:06.895204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.139 [2024-11-27 05:54:06.906243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.139 [2024-11-27 05:54:06.906261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.139 [2024-11-27 05:54:06.920662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.139 [2024-11-27 05:54:06.920687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.139 [2024-11-27 05:54:06.934994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.139 [2024-11-27 05:54:06.935012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.139 [2024-11-27 05:54:06.948895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.139 [2024-11-27 05:54:06.948913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.139 [2024-11-27 05:54:06.963819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.139 [2024-11-27 05:54:06.963838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.139 [2024-11-27 05:54:06.974314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.139 [2024-11-27 05:54:06.974332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.139 [2024-11-27 05:54:06.988641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.139 [2024-11-27 05:54:06.988660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.139 [2024-11-27 05:54:07.003375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:19.139 [2024-11-27 05:54:07.003393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:19.139 [2024-11-27 05:54:07.015629]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.139 [2024-11-27 05:54:07.015646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.139 [2024-11-27 05:54:07.029234] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.139 [2024-11-27 05:54:07.029252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.139 [2024-11-27 05:54:07.043702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.139 [2024-11-27 05:54:07.043720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.139 [2024-11-27 05:54:07.058902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.139 [2024-11-27 05:54:07.058921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.139 [2024-11-27 05:54:07.073125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.139 [2024-11-27 05:54:07.073148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.139 [2024-11-27 05:54:07.087872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.139 [2024-11-27 05:54:07.087890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.139 [2024-11-27 05:54:07.098402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.139 [2024-11-27 05:54:07.098421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.139 [2024-11-27 05:54:07.113095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.140 [2024-11-27 05:54:07.113114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.140 [2024-11-27 05:54:07.127336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:19.140 [2024-11-27 05:54:07.127354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.140 [2024-11-27 05:54:07.138590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.140 [2024-11-27 05:54:07.138609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.152553] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.152571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.167463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.167481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.182306] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.182324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.197053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.197070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.211789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.211807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.226416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.226434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.239971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 
[2024-11-27 05:54:07.239988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.254818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.254837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.268068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.268086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.277893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.277911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.292858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.292876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.307912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.307929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.323265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.323283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.335792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.335809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.347947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.347964] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.359362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.359378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 16917.00 IOPS, 132.16 MiB/s [2024-11-27T04:54:07.403Z] [2024-11-27 05:54:07.372722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.372740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.399 [2024-11-27 05:54:07.387466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.399 [2024-11-27 05:54:07.387483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.658 [2024-11-27 05:54:07.403440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.403458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.418857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.418875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.433314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.433332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.447826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.447844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.462955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.462974] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.476406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.476423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.486286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.486303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.500758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.500777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.515397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.515414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.527282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.527300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.540560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.540578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.551406] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.551423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.564772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.564789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:19.659 [2024-11-27 05:54:07.579496] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.579514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.590635] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.590652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.604803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.604821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.619017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.619035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.629502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.629520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.643909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.643927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.659 [2024-11-27 05:54:07.655311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.659 [2024-11-27 05:54:07.655329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.668531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.668548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.679330] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.679348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.692998] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.693015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.707720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.707738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.723640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.723657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.739464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.739481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.755148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.755166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.768088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.768106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.780717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.780735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.795136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.795154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.807440] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.807457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.820418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.820436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.831585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.831608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.844601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.918 [2024-11-27 05:54:07.844619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.918 [2024-11-27 05:54:07.859450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.919 [2024-11-27 05:54:07.859468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.919 [2024-11-27 05:54:07.874869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.919 [2024-11-27 05:54:07.874887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.919 [2024-11-27 05:54:07.889219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.919 [2024-11-27 05:54:07.889237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.919 [2024-11-27 05:54:07.904117] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.919 
[2024-11-27 05:54:07.904134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.919 [2024-11-27 05:54:07.918823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.919 [2024-11-27 05:54:07.918841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:07.932051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:07.932069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:07.944128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:07.944146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:07.955639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:07.955657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:07.967049] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:07.967067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:07.980896] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:07.980914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:07.995905] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:07.995933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.010649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.010666] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.023936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.023953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.039404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.039422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.051794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.051811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.064927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.064945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.080062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.080079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.090395] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.090418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.104554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.104572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.114932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.114950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:20.178 [2024-11-27 05:54:08.128982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.129000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.144087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.144104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.159641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.159658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.178 [2024-11-27 05:54:08.174403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.178 [2024-11-27 05:54:08.174421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.437 [2024-11-27 05:54:08.189526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.437 [2024-11-27 05:54:08.189546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.437 [2024-11-27 05:54:08.204368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.437 [2024-11-27 05:54:08.204387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.437 [2024-11-27 05:54:08.219054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.219073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.230613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.230632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.245487] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.245506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.260057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.260077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.274972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.274991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.287711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.287729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.300427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.300446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.311060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.311080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.324586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.324605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.339270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.339289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.350524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.350548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.365032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.365051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 16853.33 IOPS, 131.67 MiB/s [2024-11-27T04:54:08.442Z] [2024-11-27 05:54:08.379847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.379866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.391290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.391307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.404784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.404802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.419095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.419115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.438 [2024-11-27 05:54:08.431075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.438 [2024-11-27 05:54:08.431093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.445312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.445332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.460166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.460186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.475010] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.475028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.486138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.486156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.501495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.501512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.515933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.515952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.526098] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.526116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.541079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.541099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.555858] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.555877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.567156] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 
[2024-11-27 05:54:08.567173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.581620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.581638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.596488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.596506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.611164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.611188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.621705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.621723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.636761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.636781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.651141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.651160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.663832] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.663850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.697 [2024-11-27 05:54:08.676074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.697 [2024-11-27 05:54:08.676103] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:20.697 [2024-11-27 05:54:08.690935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:20.697 [2024-11-27 05:54:08.690952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the two error records above repeat for every subsequent add-namespace attempt, one pair roughly every 10-15 ms, from 05:54:08.705354 through 05:54:10.531056; the near-identical repeats, interleaved with the I/O progress lines below, are elided]
00:32:21.476 16855.25 IOPS, 131.68 MiB/s [2024-11-27T04:54:09.480Z]
00:32:22.514 16851.60 IOPS, 131.65 MiB/s [2024-11-27T04:54:10.518Z]
00:32:22.514 Latency(us) [2024-11-27T04:54:10.518Z]
00:32:22.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:22.514 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:32:22.514 Nvme1n1 : 5.01 16853.09 131.66 0.00 0.00 7587.97 1997.29 13668.94
00:32:22.514 ===================================================================================================================
00:32:22.514 Total : 16853.09 131.66 0.00 0.00 7587.97 1997.29 13668.94
00:32:22.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1981375) - No such process
00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1981375
00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:22.773 delay0 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.773 05:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:22.773 [2024-11-27 05:54:10.677684] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery 
service or discovery service referral 00:32:30.896 Initializing NVMe Controllers 00:32:30.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:30.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:30.896 Initialization complete. Launching workers. 00:32:30.896 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 269, failed: 20792 00:32:30.896 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20965, failed to submit 96 00:32:30.896 success 20868, unsuccessful 97, failed 0 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:30.896 rmmod nvme_tcp 00:32:30.896 rmmod nvme_fabrics 00:32:30.896 rmmod nvme_keyring 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 
00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1979541 ']' 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1979541 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1979541 ']' 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1979541 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1979541 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1979541' 00:32:30.896 killing process with pid 1979541 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1979541 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1979541 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:30.896 05:54:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:30.896 05:54:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:30.896 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:30.896 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:30.896 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.896 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.896 05:54:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.276 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:32.276 00:32:32.276 real 0m32.132s 00:32:32.276 user 0m41.362s 00:32:32.276 sys 0m13.081s 00:32:32.276 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:32.276 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:32.276 ************************************ 00:32:32.276 END TEST nvmf_zcopy 00:32:32.276 ************************************ 00:32:32.276 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:32.276 05:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:32.276 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.276 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:32.276 ************************************ 00:32:32.276 START TEST nvmf_nmic 00:32:32.276 ************************************ 00:32:32.276 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:32.276 * Looking for test storage... 00:32:32.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:32.276 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:32.276 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:32:32.276 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@337 -- # IFS=.-: 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:32.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.536 --rc genhtml_branch_coverage=1 00:32:32.536 --rc 
genhtml_function_coverage=1 00:32:32.536 --rc genhtml_legend=1 00:32:32.536 --rc geninfo_all_blocks=1 00:32:32.536 --rc geninfo_unexecuted_blocks=1 00:32:32.536 00:32:32.536 ' 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:32.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.536 --rc genhtml_branch_coverage=1 00:32:32.536 --rc genhtml_function_coverage=1 00:32:32.536 --rc genhtml_legend=1 00:32:32.536 --rc geninfo_all_blocks=1 00:32:32.536 --rc geninfo_unexecuted_blocks=1 00:32:32.536 00:32:32.536 ' 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:32.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.536 --rc genhtml_branch_coverage=1 00:32:32.536 --rc genhtml_function_coverage=1 00:32:32.536 --rc genhtml_legend=1 00:32:32.536 --rc geninfo_all_blocks=1 00:32:32.536 --rc geninfo_unexecuted_blocks=1 00:32:32.536 00:32:32.536 ' 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:32.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.536 --rc genhtml_branch_coverage=1 00:32:32.536 --rc genhtml_function_coverage=1 00:32:32.536 --rc genhtml_legend=1 00:32:32.536 --rc geninfo_all_blocks=1 00:32:32.536 --rc geninfo_unexecuted_blocks=1 00:32:32.536 00:32:32.536 ' 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:32:32.536 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.537 05:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.537 05:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.537 05:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:32.537 05:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:39.112 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:39.112 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:39.112 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:39.113 Found net devices under 0000:86:00.0: cvl_0_0 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:39.113 Found net devices under 0000:86:00.1: cvl_0_1 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:39.113 05:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.113 05:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:39.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:39.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:32:39.113 00:32:39.113 --- 10.0.0.2 ping statistics --- 00:32:39.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.113 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:39.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:32:39.113 00:32:39.113 --- 10.0.0.1 ping statistics --- 00:32:39.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.113 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:39.113 05:54:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1987331 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1987331 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1987331 ']' 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.113 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.113 [2024-11-27 05:54:26.301855] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:32:39.113 [2024-11-27 05:54:26.302759] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:32:39.113 [2024-11-27 05:54:26.302794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.113 [2024-11-27 05:54:26.380402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:39.113 [2024-11-27 05:54:26.423352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.113 [2024-11-27 05:54:26.423391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.113 [2024-11-27 05:54:26.423397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.114 [2024-11-27 05:54:26.423403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.114 [2024-11-27 05:54:26.423407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.114 [2024-11-27 05:54:26.424916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.114 [2024-11-27 05:54:26.425030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:39.114 [2024-11-27 05:54:26.425138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.114 [2024-11-27 05:54:26.425139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:39.114 [2024-11-27 05:54:26.493995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:39.114 [2024-11-27 05:54:26.494311] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:39.114 [2024-11-27 05:54:26.494826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:39.114 [2024-11-27 05:54:26.495019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:39.114 [2024-11-27 05:54:26.495087] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.114 [2024-11-27 05:54:26.557853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.114 Malloc0 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.114 [2024-11-27 
05:54:26.638146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:39.114 test case1: single bdev can't be used in multiple subsystems 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.114 05:54:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.114 [2024-11-27 05:54:26.673538] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:39.114 [2024-11-27 05:54:26.673565] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:39.114 [2024-11-27 05:54:26.673572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:39.114 request: 00:32:39.114 { 00:32:39.114 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:39.114 "namespace": { 00:32:39.114 "bdev_name": "Malloc0", 00:32:39.114 "no_auto_visible": false, 00:32:39.114 "hide_metadata": false 00:32:39.114 }, 00:32:39.114 "method": "nvmf_subsystem_add_ns", 00:32:39.114 "req_id": 1 00:32:39.114 } 00:32:39.114 Got JSON-RPC error response 00:32:39.114 response: 00:32:39.114 { 00:32:39.114 "code": -32602, 00:32:39.114 "message": "Invalid parameters" 00:32:39.114 } 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:39.114 Adding namespace failed - expected result. 
00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:39.114 test case2: host connect to nvmf target in multiple paths 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.114 [2024-11-27 05:54:26.685648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:39.114 05:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:39.374 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:39.374 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:39.374 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:39.374 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:39.374 05:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:41.279 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:41.279 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:41.279 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:41.279 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:41.279 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:41.279 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:41.279 05:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:41.279 [global] 00:32:41.279 thread=1 00:32:41.279 invalidate=1 00:32:41.279 rw=write 00:32:41.279 time_based=1 00:32:41.279 runtime=1 00:32:41.279 ioengine=libaio 00:32:41.279 direct=1 00:32:41.279 bs=4096 00:32:41.279 iodepth=1 00:32:41.279 norandommap=0 00:32:41.279 numjobs=1 00:32:41.279 00:32:41.279 verify_dump=1 00:32:41.279 verify_backlog=512 00:32:41.279 verify_state_save=0 00:32:41.279 do_verify=1 00:32:41.279 verify=crc32c-intel 00:32:41.279 [job0] 00:32:41.279 filename=/dev/nvme0n1 00:32:41.279 Could not set queue depth (nvme0n1) 00:32:41.537 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:41.537 fio-3.35 00:32:41.537 Starting 1 thread 00:32:42.914 00:32:42.914 job0: (groupid=0, jobs=1): err= 0: pid=1987961: Wed Nov 27 
05:54:30 2024 00:32:42.914 read: IOPS=2486, BW=9946KiB/s (10.2MB/s)(9956KiB/1001msec) 00:32:42.914 slat (nsec): min=6259, max=28249, avg=7302.99, stdev=1010.51 00:32:42.914 clat (usec): min=183, max=40808, avg=223.56, stdev=815.22 00:32:42.914 lat (usec): min=190, max=40815, avg=230.86, stdev=815.23 00:32:42.914 clat percentiles (usec): 00:32:42.914 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 196], 00:32:42.914 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:32:42.914 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 219], 95.00th=[ 229], 00:32:42.914 | 99.00th=[ 260], 99.50th=[ 289], 99.90th=[ 351], 99.95th=[ 2507], 00:32:42.914 | 99.99th=[40633] 00:32:42.914 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:42.914 slat (nsec): min=9114, max=47004, avg=10322.23, stdev=1819.01 00:32:42.914 clat (usec): min=120, max=3157, avg=151.71, stdev=76.70 00:32:42.914 lat (usec): min=132, max=3168, avg=162.03, stdev=76.84 00:32:42.914 clat percentiles (usec): 00:32:42.914 | 1.00th=[ 126], 5.00th=[ 128], 10.00th=[ 129], 20.00th=[ 130], 00:32:42.914 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 135], 00:32:42.914 | 70.00th=[ 139], 80.00th=[ 167], 90.00th=[ 212], 95.00th=[ 251], 00:32:42.914 | 99.00th=[ 265], 99.50th=[ 297], 99.90th=[ 758], 99.95th=[ 1565], 00:32:42.914 | 99.99th=[ 3163] 00:32:42.914 bw ( KiB/s): min=12263, max=12263, per=100.00%, avg=12263.00, stdev= 0.00, samples=1 00:32:42.914 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:32:42.914 lat (usec) : 250=95.88%, 500=4.02%, 1000=0.02% 00:32:42.914 lat (msec) : 2=0.02%, 4=0.04%, 50=0.02% 00:32:42.914 cpu : usr=2.30%, sys=4.70%, ctx=5049, majf=0, minf=1 00:32:42.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:42.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.914 
issued rwts: total=2489,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:42.914 00:32:42.914 Run status group 0 (all jobs): 00:32:42.914 READ: bw=9946KiB/s (10.2MB/s), 9946KiB/s-9946KiB/s (10.2MB/s-10.2MB/s), io=9956KiB (10.2MB), run=1001-1001msec 00:32:42.914 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:32:42.914 00:32:42.914 Disk stats (read/write): 00:32:42.914 nvme0n1: ios=2097/2541, merge=0/0, ticks=472/360, in_queue=832, util=91.08% 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:42.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:42.914 
05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.914 rmmod nvme_tcp 00:32:42.914 rmmod nvme_fabrics 00:32:42.914 rmmod nvme_keyring 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1987331 ']' 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1987331 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1987331 ']' 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1987331 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1987331 
00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1987331' 00:32:42.914 killing process with pid 1987331 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1987331 00:32:42.914 05:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1987331 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.173 05:54:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.173 05:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:45.711 00:32:45.711 real 0m12.986s 00:32:45.711 user 0m23.809s 00:32:45.711 sys 0m6.098s 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:45.711 ************************************ 00:32:45.711 END TEST nvmf_nmic 00:32:45.711 ************************************ 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:45.711 ************************************ 00:32:45.711 START TEST nvmf_fio_target 00:32:45.711 ************************************ 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:45.711 * Looking for test storage... 
00:32:45.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.711 
05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:45.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.711 --rc genhtml_branch_coverage=1 00:32:45.711 --rc genhtml_function_coverage=1 00:32:45.711 --rc genhtml_legend=1 00:32:45.711 --rc geninfo_all_blocks=1 00:32:45.711 --rc geninfo_unexecuted_blocks=1 00:32:45.711 00:32:45.711 ' 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:45.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.711 --rc genhtml_branch_coverage=1 00:32:45.711 --rc genhtml_function_coverage=1 00:32:45.711 --rc genhtml_legend=1 00:32:45.711 --rc geninfo_all_blocks=1 00:32:45.711 --rc geninfo_unexecuted_blocks=1 00:32:45.711 00:32:45.711 ' 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:45.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.711 --rc genhtml_branch_coverage=1 00:32:45.711 --rc genhtml_function_coverage=1 00:32:45.711 --rc genhtml_legend=1 00:32:45.711 --rc geninfo_all_blocks=1 00:32:45.711 --rc geninfo_unexecuted_blocks=1 00:32:45.711 00:32:45.711 ' 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:45.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.711 --rc genhtml_branch_coverage=1 00:32:45.711 --rc genhtml_function_coverage=1 00:32:45.711 --rc genhtml_legend=1 00:32:45.711 --rc geninfo_all_blocks=1 
00:32:45.711 --rc geninfo_unexecuted_blocks=1 00:32:45.711 00:32:45.711 ' 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.711 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:45.712 
05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.712 05:54:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.712 
05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:45.712 05:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.712 05:54:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.284 05:54:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:52.284 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:52.284 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.284 
05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.284 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:52.285 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:52.285 Found net devices under 0000:86:00.1: cvl_0_1 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:52.285 05:54:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:52.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:32:52.285 00:32:52.285 --- 10.0.0.2 ping statistics --- 00:32:52.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.285 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:52.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:32:52.285 00:32:52.285 --- 10.0.0.1 ping statistics --- 00:32:52.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.285 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.285 05:54:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1991724 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1991724 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1991724 ']' 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.285 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.285 [2024-11-27 05:54:39.413970] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:52.285 [2024-11-27 05:54:39.414832] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:32:52.285 [2024-11-27 05:54:39.414864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.285 [2024-11-27 05:54:39.492908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:52.285 [2024-11-27 05:54:39.534867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:52.285 [2024-11-27 05:54:39.534907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:52.285 [2024-11-27 05:54:39.534914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:52.285 [2024-11-27 05:54:39.534920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:52.285 [2024-11-27 05:54:39.534925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:52.285 [2024-11-27 05:54:39.536528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.285 [2024-11-27 05:54:39.536645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:52.285 [2024-11-27 05:54:39.536755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.285 [2024-11-27 05:54:39.536756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:52.285 [2024-11-27 05:54:39.605875] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:52.286 [2024-11-27 05:54:39.606082] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:52.286 [2024-11-27 05:54:39.606685] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:52.286 [2024-11-27 05:54:39.606913] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:52.286 [2024-11-27 05:54:39.606965] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:52.286 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.286 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:52.286 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:52.286 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:52.286 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:52.286 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.286 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:52.286 [2024-11-27 05:54:39.849449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:52.286 05:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:52.286 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:52.286 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:32:52.544 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:52.544 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:52.803 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:52.803 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:52.803 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:52.803 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:53.062 05:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:53.321 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:53.321 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:53.580 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:53.580 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:53.580 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:53.580 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:53.839 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:54.098 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:54.098 05:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:54.356 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:54.356 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:54.356 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:54.615 [2024-11-27 05:54:42.485349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.615 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:54.874 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:55.133 05:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:55.392 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:55.392 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:55.392 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:55.392 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:55.392 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:55.392 05:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:57.296 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:57.296 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:57.296 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:57.296 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:57.296 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:57.296 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:57.296 05:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:57.296 [global] 00:32:57.296 thread=1 00:32:57.296 invalidate=1 00:32:57.296 rw=write 00:32:57.296 time_based=1 00:32:57.296 runtime=1 00:32:57.296 ioengine=libaio 00:32:57.296 direct=1 00:32:57.296 bs=4096 00:32:57.296 iodepth=1 00:32:57.296 norandommap=0 00:32:57.296 numjobs=1 00:32:57.296 00:32:57.296 verify_dump=1 00:32:57.296 verify_backlog=512 00:32:57.296 verify_state_save=0 00:32:57.296 do_verify=1 00:32:57.296 verify=crc32c-intel 00:32:57.296 [job0] 00:32:57.296 filename=/dev/nvme0n1 00:32:57.296 [job1] 00:32:57.296 filename=/dev/nvme0n2 00:32:57.296 [job2] 00:32:57.296 filename=/dev/nvme0n3 00:32:57.296 [job3] 00:32:57.296 filename=/dev/nvme0n4 00:32:57.555 Could not set queue depth (nvme0n1) 00:32:57.555 Could not set queue depth (nvme0n2) 00:32:57.555 Could not set queue depth (nvme0n3) 00:32:57.555 Could not set queue depth (nvme0n4) 00:32:57.814 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.814 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.814 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.814 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.814 fio-3.35 00:32:57.814 Starting 4 threads 00:32:59.192 00:32:59.192 job0: (groupid=0, jobs=1): err= 0: pid=1992843: Wed Nov 27 05:54:46 2024 00:32:59.192 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:32:59.192 slat (nsec): min=9954, max=24781, avg=22101.64, stdev=3094.80 00:32:59.192 clat (usec): min=40501, max=41072, avg=40949.47, stdev=111.68 00:32:59.192 lat (usec): min=40511, 
max=41091, avg=40971.58, stdev=113.97 00:32:59.192 clat percentiles (usec): 00:32:59.192 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:59.192 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:59.192 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:59.192 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:59.192 | 99.99th=[41157] 00:32:59.192 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:32:59.192 slat (usec): min=10, max=945, avg=13.98, stdev=41.34 00:32:59.192 clat (usec): min=133, max=294, avg=181.81, stdev=18.00 00:32:59.192 lat (usec): min=146, max=1110, avg=195.79, stdev=44.54 00:32:59.192 clat percentiles (usec): 00:32:59.192 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:32:59.192 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:32:59.192 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 210], 00:32:59.192 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 293], 00:32:59.192 | 99.99th=[ 293] 00:32:59.192 bw ( KiB/s): min= 4096, max= 4096, per=17.89%, avg=4096.00, stdev= 0.00, samples=1 00:32:59.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:59.192 lat (usec) : 250=94.76%, 500=1.12% 00:32:59.192 lat (msec) : 50=4.12% 00:32:59.192 cpu : usr=0.30%, sys=1.00%, ctx=536, majf=0, minf=1 00:32:59.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.192 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.192 job1: (groupid=0, jobs=1): err= 0: pid=1992844: Wed Nov 27 05:54:46 2024 00:32:59.192 read: IOPS=2492, BW=9970KiB/s (10.2MB/s)(9980KiB/1001msec) 
00:32:59.192 slat (nsec): min=6576, max=25232, avg=7438.53, stdev=835.15 00:32:59.192 clat (usec): min=182, max=433, avg=219.33, stdev=19.42 00:32:59.192 lat (usec): min=190, max=440, avg=226.77, stdev=19.42 00:32:59.192 clat percentiles (usec): 00:32:59.192 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 204], 00:32:59.192 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 217], 00:32:59.192 | 70.00th=[ 223], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 253], 00:32:59.192 | 99.00th=[ 258], 99.50th=[ 260], 99.90th=[ 322], 99.95th=[ 429], 00:32:59.192 | 99.99th=[ 433] 00:32:59.192 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:59.192 slat (usec): min=4, max=666, avg=10.90, stdev=13.03 00:32:59.192 clat (usec): min=123, max=553, avg=154.41, stdev=21.68 00:32:59.192 lat (usec): min=134, max=892, avg=165.31, stdev=26.11 00:32:59.192 clat percentiles (usec): 00:32:59.192 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:32:59.192 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:32:59.192 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 182], 95.00th=[ 192], 00:32:59.192 | 99.00th=[ 227], 99.50th=[ 262], 99.90th=[ 318], 99.95th=[ 359], 00:32:59.192 | 99.99th=[ 553] 00:32:59.192 bw ( KiB/s): min=12288, max=12288, per=53.68%, avg=12288.00, stdev= 0.00, samples=1 00:32:59.192 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:32:59.192 lat (usec) : 250=95.73%, 500=4.25%, 750=0.02% 00:32:59.192 cpu : usr=2.70%, sys=4.50%, ctx=5057, majf=0, minf=1 00:32:59.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.192 issued rwts: total=2495,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.192 job2: (groupid=0, 
jobs=1): err= 0: pid=1992845: Wed Nov 27 05:54:46 2024 00:32:59.192 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:59.192 slat (nsec): min=7562, max=29980, avg=8709.14, stdev=992.97 00:32:59.192 clat (usec): min=232, max=461, avg=262.25, stdev=14.64 00:32:59.192 lat (usec): min=241, max=470, avg=270.96, stdev=14.83 00:32:59.192 clat percentiles (usec): 00:32:59.192 | 1.00th=[ 239], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 251], 00:32:59.192 | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:32:59.192 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:32:59.192 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 433], 99.95th=[ 445], 00:32:59.192 | 99.99th=[ 461] 00:32:59.192 write: IOPS=2267, BW=9071KiB/s (9289kB/s)(9080KiB/1001msec); 0 zone resets 00:32:59.192 slat (nsec): min=9880, max=52652, avg=12731.91, stdev=1950.11 00:32:59.192 clat (usec): min=150, max=367, avg=177.61, stdev=14.09 00:32:59.192 lat (usec): min=162, max=406, avg=190.34, stdev=14.87 00:32:59.192 clat percentiles (usec): 00:32:59.192 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:32:59.192 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:32:59.192 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:32:59.192 | 99.00th=[ 223], 99.50th=[ 229], 99.90th=[ 334], 99.95th=[ 367], 00:32:59.192 | 99.99th=[ 367] 00:32:59.192 bw ( KiB/s): min= 8640, max= 8640, per=37.75%, avg=8640.00, stdev= 0.00, samples=1 00:32:59.192 iops : min= 2160, max= 2160, avg=2160.00, stdev= 0.00, samples=1 00:32:59.192 lat (usec) : 250=61.02%, 500=38.98% 00:32:59.192 cpu : usr=3.50%, sys=3.90%, ctx=4319, majf=0, minf=2 00:32:59.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.192 issued rwts: total=2048,2270,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:32:59.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.192 job3: (groupid=0, jobs=1): err= 0: pid=1992846: Wed Nov 27 05:54:46 2024 00:32:59.192 read: IOPS=336, BW=1345KiB/s (1377kB/s)(1376KiB/1023msec) 00:32:59.192 slat (nsec): min=8070, max=27631, avg=9992.15, stdev=3752.88 00:32:59.192 clat (usec): min=206, max=41988, avg=2656.38, stdev=9564.04 00:32:59.192 lat (usec): min=214, max=42012, avg=2666.37, stdev=9567.33 00:32:59.192 clat percentiles (usec): 00:32:59.192 | 1.00th=[ 212], 5.00th=[ 241], 10.00th=[ 251], 20.00th=[ 262], 00:32:59.192 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:32:59.192 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[41157], 00:32:59.192 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:32:59.192 | 99.99th=[42206] 00:32:59.192 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:32:59.193 slat (nsec): min=11774, max=36613, avg=13304.91, stdev=2244.30 00:32:59.193 clat (usec): min=155, max=656, avg=185.50, stdev=39.89 00:32:59.193 lat (usec): min=167, max=670, avg=198.81, stdev=40.42 00:32:59.193 clat percentiles (usec): 00:32:59.193 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:32:59.193 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:32:59.193 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 206], 95.00th=[ 229], 00:32:59.193 | 99.00th=[ 355], 99.50th=[ 510], 99.90th=[ 660], 99.95th=[ 660], 00:32:59.193 | 99.99th=[ 660] 00:32:59.193 bw ( KiB/s): min= 4096, max= 4096, per=17.89%, avg=4096.00, stdev= 0.00, samples=1 00:32:59.193 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:59.193 lat (usec) : 250=61.57%, 500=35.63%, 750=0.35% 00:32:59.193 lat (msec) : 4=0.12%, 50=2.34% 00:32:59.193 cpu : usr=0.68%, sys=1.57%, ctx=857, majf=0, minf=1 00:32:59.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.193 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.193 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.193 issued rwts: total=344,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:59.193 00:32:59.193 Run status group 0 (all jobs): 00:32:59.193 READ: bw=18.7MiB/s (19.7MB/s), 87.6KiB/s-9970KiB/s (89.8kB/s-10.2MB/s), io=19.2MiB (20.1MB), run=1001-1023msec 00:32:59.193 WRITE: bw=22.4MiB/s (23.4MB/s), 2002KiB/s-9.99MiB/s (2050kB/s-10.5MB/s), io=22.9MiB (24.0MB), run=1001-1023msec 00:32:59.193 00:32:59.193 Disk stats (read/write): 00:32:59.193 nvme0n1: ios=71/512, merge=0/0, ticks=1078/86, in_queue=1164, util=97.60% 00:32:59.193 nvme0n2: ios=2055/2048, merge=0/0, ticks=692/315, in_queue=1007, util=98.04% 00:32:59.193 nvme0n3: ios=1536/1970, merge=0/0, ticks=403/336, in_queue=739, util=87.64% 00:32:59.193 nvme0n4: ios=395/512, merge=0/0, ticks=844/88, in_queue=932, util=97.90% 00:32:59.193 05:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:59.193 [global] 00:32:59.193 thread=1 00:32:59.193 invalidate=1 00:32:59.193 rw=randwrite 00:32:59.193 time_based=1 00:32:59.193 runtime=1 00:32:59.193 ioengine=libaio 00:32:59.193 direct=1 00:32:59.193 bs=4096 00:32:59.193 iodepth=1 00:32:59.193 norandommap=0 00:32:59.193 numjobs=1 00:32:59.193 00:32:59.193 verify_dump=1 00:32:59.193 verify_backlog=512 00:32:59.193 verify_state_save=0 00:32:59.193 do_verify=1 00:32:59.193 verify=crc32c-intel 00:32:59.193 [job0] 00:32:59.193 filename=/dev/nvme0n1 00:32:59.193 [job1] 00:32:59.193 filename=/dev/nvme0n2 00:32:59.193 [job2] 00:32:59.193 filename=/dev/nvme0n3 00:32:59.193 [job3] 00:32:59.193 filename=/dev/nvme0n4 00:32:59.193 Could not set queue depth (nvme0n1) 00:32:59.193 Could not set queue depth (nvme0n2) 
00:32:59.193 Could not set queue depth (nvme0n3) 00:32:59.193 Could not set queue depth (nvme0n4) 00:32:59.193 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.193 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.193 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.193 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:59.193 fio-3.35 00:32:59.193 Starting 4 threads 00:33:00.570 00:33:00.570 job0: (groupid=0, jobs=1): err= 0: pid=1993213: Wed Nov 27 05:54:48 2024 00:33:00.570 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:33:00.570 slat (nsec): min=7191, max=44362, avg=8452.93, stdev=1363.77 00:33:00.570 clat (usec): min=173, max=40995, avg=261.32, stdev=900.91 00:33:00.570 lat (usec): min=181, max=41004, avg=269.78, stdev=900.91 00:33:00.570 clat percentiles (usec): 00:33:00.570 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 225], 00:33:00.570 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:33:00.570 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 297], 00:33:00.570 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 367], 99.95th=[ 367], 00:33:00.570 | 99.99th=[41157] 00:33:00.570 write: IOPS=2310, BW=9243KiB/s (9465kB/s)(9252KiB/1001msec); 0 zone resets 00:33:00.570 slat (nsec): min=10385, max=38464, avg=11887.97, stdev=1789.28 00:33:00.570 clat (usec): min=126, max=326, avg=175.90, stdev=26.63 00:33:00.570 lat (usec): min=138, max=337, avg=187.79, stdev=26.70 00:33:00.570 clat percentiles (usec): 00:33:00.570 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 157], 00:33:00.570 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:33:00.570 | 70.00th=[ 180], 80.00th=[ 200], 90.00th=[ 215], 95.00th=[ 225], 00:33:00.570 | 
99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 322], 00:33:00.570 | 99.99th=[ 326] 00:33:00.570 bw ( KiB/s): min= 8192, max= 8192, per=25.21%, avg=8192.00, stdev= 0.00, samples=1 00:33:00.570 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:00.570 lat (usec) : 250=87.71%, 500=12.27% 00:33:00.570 lat (msec) : 50=0.02% 00:33:00.570 cpu : usr=3.40%, sys=7.30%, ctx=4364, majf=0, minf=1 00:33:00.570 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.570 issued rwts: total=2048,2313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.570 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.570 job1: (groupid=0, jobs=1): err= 0: pid=1993214: Wed Nov 27 05:54:48 2024 00:33:00.570 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:33:00.570 slat (nsec): min=6286, max=22898, avg=7289.47, stdev=1261.67 00:33:00.571 clat (usec): min=171, max=937, avg=276.33, stdev=66.66 00:33:00.571 lat (usec): min=178, max=945, avg=283.62, stdev=66.70 00:33:00.571 clat percentiles (usec): 00:33:00.571 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 229], 00:33:00.571 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 269], 00:33:00.571 | 70.00th=[ 289], 80.00th=[ 318], 90.00th=[ 347], 95.00th=[ 400], 00:33:00.571 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 685], 99.95th=[ 709], 00:33:00.571 | 99.99th=[ 938] 00:33:00.571 write: IOPS=2187, BW=8748KiB/s (8958kB/s)(8748KiB/1000msec); 0 zone resets 00:33:00.571 slat (nsec): min=8691, max=38841, avg=9804.63, stdev=1199.70 00:33:00.571 clat (usec): min=125, max=350, avg=177.76, stdev=29.73 00:33:00.571 lat (usec): min=134, max=389, avg=187.57, stdev=29.87 00:33:00.571 clat percentiles (usec): 00:33:00.571 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:33:00.571 | 
30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 172], 60.00th=[ 182], 00:33:00.571 | 70.00th=[ 194], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 231], 00:33:00.571 | 99.00th=[ 249], 99.50th=[ 281], 99.90th=[ 306], 99.95th=[ 343], 00:33:00.571 | 99.99th=[ 351] 00:33:00.571 bw ( KiB/s): min= 9592, max= 9592, per=29.51%, avg=9592.00, stdev= 0.00, samples=1 00:33:00.571 iops : min= 2398, max= 2398, avg=2398.00, stdev= 0.00, samples=1 00:33:00.571 lat (usec) : 250=69.99%, 500=28.88%, 750=1.11%, 1000=0.02% 00:33:00.571 cpu : usr=1.60%, sys=4.20%, ctx=4235, majf=0, minf=2 00:33:00.571 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.571 issued rwts: total=2048,2187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.571 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.571 job2: (groupid=0, jobs=1): err= 0: pid=1993221: Wed Nov 27 05:54:48 2024 00:33:00.571 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:33:00.571 slat (nsec): min=7564, max=42199, avg=9287.49, stdev=1470.98 00:33:00.571 clat (usec): min=218, max=615, avg=265.18, stdev=36.57 00:33:00.571 lat (usec): min=228, max=624, avg=274.47, stdev=36.62 00:33:00.571 clat percentiles (usec): 00:33:00.571 | 1.00th=[ 229], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:33:00.571 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:33:00.571 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 314], 00:33:00.571 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 510], 99.95th=[ 510], 00:33:00.571 | 99.99th=[ 619] 00:33:00.571 write: IOPS=2121, BW=8488KiB/s (8691kB/s)(8496KiB/1001msec); 0 zone resets 00:33:00.571 slat (nsec): min=10667, max=36748, avg=12550.64, stdev=1964.26 00:33:00.571 clat (usec): min=160, max=377, avg=187.38, stdev=16.51 00:33:00.571 lat (usec): min=172, max=389, 
avg=199.93, stdev=16.81 00:33:00.571 clat percentiles (usec): 00:33:00.571 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 178], 00:33:00.571 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:33:00.571 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 200], 95.00th=[ 215], 00:33:00.571 | 99.00th=[ 245], 99.50th=[ 289], 99.90th=[ 306], 99.95th=[ 322], 00:33:00.571 | 99.99th=[ 379] 00:33:00.571 bw ( KiB/s): min= 8192, max= 8192, per=25.21%, avg=8192.00, stdev= 0.00, samples=1 00:33:00.571 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:00.571 lat (usec) : 250=73.97%, 500=25.89%, 750=0.14% 00:33:00.571 cpu : usr=2.90%, sys=7.80%, ctx=4173, majf=0, minf=1 00:33:00.571 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.571 issued rwts: total=2048,2124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.571 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.571 job3: (groupid=0, jobs=1): err= 0: pid=1993223: Wed Nov 27 05:54:48 2024 00:33:00.571 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:33:00.571 slat (nsec): min=7362, max=25167, avg=8684.15, stdev=1810.72 00:33:00.571 clat (usec): min=208, max=41143, avg=669.24, stdev=3798.46 00:33:00.571 lat (usec): min=216, max=41153, avg=677.92, stdev=3799.79 00:33:00.571 clat percentiles (usec): 00:33:00.571 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 249], 00:33:00.571 | 30.00th=[ 262], 40.00th=[ 281], 50.00th=[ 314], 60.00th=[ 330], 00:33:00.571 | 70.00th=[ 343], 80.00th=[ 359], 90.00th=[ 404], 95.00th=[ 453], 00:33:00.571 | 99.00th=[ 510], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:00.571 | 99.99th=[41157] 00:33:00.571 write: IOPS=1507, BW=6030KiB/s (6175kB/s)(6036KiB/1001msec); 0 zone resets 00:33:00.571 slat (nsec): min=9626, 
max=40408, avg=10998.11, stdev=1923.31 00:33:00.571 clat (usec): min=130, max=382, avg=187.21, stdev=29.68 00:33:00.571 lat (usec): min=141, max=416, avg=198.21, stdev=29.81 00:33:00.571 clat percentiles (usec): 00:33:00.571 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:33:00.571 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 194], 00:33:00.571 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 235], 00:33:00.571 | 99.00th=[ 277], 99.50th=[ 297], 99.90th=[ 330], 99.95th=[ 383], 00:33:00.571 | 99.99th=[ 383] 00:33:00.571 bw ( KiB/s): min= 4096, max= 4096, per=12.60%, avg=4096.00, stdev= 0.00, samples=1 00:33:00.571 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:00.571 lat (usec) : 250=66.64%, 500=32.41%, 750=0.59% 00:33:00.571 lat (msec) : 50=0.36% 00:33:00.571 cpu : usr=2.00%, sys=3.20%, ctx=2534, majf=0, minf=1 00:33:00.571 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.571 issued rwts: total=1024,1509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.571 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.571 00:33:00.571 Run status group 0 (all jobs): 00:33:00.571 READ: bw=28.0MiB/s (29.3MB/s), 4092KiB/s-8192KiB/s (4190kB/s-8389kB/s), io=28.0MiB (29.4MB), run=1000-1001msec 00:33:00.571 WRITE: bw=31.7MiB/s (33.3MB/s), 6030KiB/s-9243KiB/s (6175kB/s-9465kB/s), io=31.8MiB (33.3MB), run=1000-1001msec 00:33:00.571 00:33:00.571 Disk stats (read/write): 00:33:00.571 nvme0n1: ios=1678/2048, merge=0/0, ticks=1349/336, in_queue=1685, util=99.60% 00:33:00.571 nvme0n2: ios=1637/2048, merge=0/0, ticks=438/349, in_queue=787, util=86.69% 00:33:00.571 nvme0n3: ios=1610/2048, merge=0/0, ticks=712/376, in_queue=1088, util=97.91% 00:33:00.571 nvme0n4: ios=832/1024, merge=0/0, ticks=764/184, in_queue=948, 
util=96.85% 00:33:00.571 05:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:00.571 [global] 00:33:00.571 thread=1 00:33:00.571 invalidate=1 00:33:00.571 rw=write 00:33:00.571 time_based=1 00:33:00.571 runtime=1 00:33:00.571 ioengine=libaio 00:33:00.571 direct=1 00:33:00.571 bs=4096 00:33:00.571 iodepth=128 00:33:00.571 norandommap=0 00:33:00.571 numjobs=1 00:33:00.571 00:33:00.571 verify_dump=1 00:33:00.571 verify_backlog=512 00:33:00.571 verify_state_save=0 00:33:00.571 do_verify=1 00:33:00.571 verify=crc32c-intel 00:33:00.571 [job0] 00:33:00.571 filename=/dev/nvme0n1 00:33:00.571 [job1] 00:33:00.571 filename=/dev/nvme0n2 00:33:00.571 [job2] 00:33:00.571 filename=/dev/nvme0n3 00:33:00.571 [job3] 00:33:00.571 filename=/dev/nvme0n4 00:33:00.571 Could not set queue depth (nvme0n1) 00:33:00.571 Could not set queue depth (nvme0n2) 00:33:00.571 Could not set queue depth (nvme0n3) 00:33:00.571 Could not set queue depth (nvme0n4) 00:33:00.829 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:00.829 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:00.829 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:00.829 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:00.829 fio-3.35 00:33:00.829 Starting 4 threads 00:33:02.201 00:33:02.201 job0: (groupid=0, jobs=1): err= 0: pid=1993591: Wed Nov 27 05:54:49 2024 00:33:02.201 read: IOPS=5462, BW=21.3MiB/s (22.4MB/s)(21.4MiB/1005msec) 00:33:02.201 slat (nsec): min=1292, max=14736k, avg=89312.92, stdev=734667.45 00:33:02.201 clat (usec): min=1013, max=47813, avg=11572.60, stdev=6371.68 00:33:02.201 lat (usec): min=4210, max=47820, 
avg=11661.91, stdev=6426.46 00:33:02.201 clat percentiles (usec): 00:33:02.201 | 1.00th=[ 6849], 5.00th=[ 7504], 10.00th=[ 8291], 20.00th=[ 8848], 00:33:02.201 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10159], 00:33:02.201 | 70.00th=[10421], 80.00th=[11863], 90.00th=[15795], 95.00th=[22152], 00:33:02.201 | 99.00th=[43779], 99.50th=[47449], 99.90th=[47973], 99.95th=[47973], 00:33:02.201 | 99.99th=[47973] 00:33:02.201 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:33:02.201 slat (nsec): min=2000, max=35243k, avg=81408.12, stdev=718152.16 00:33:02.201 clat (usec): min=2383, max=49452, avg=10873.23, stdev=5917.19 00:33:02.201 lat (usec): min=2394, max=53956, avg=10954.64, stdev=5958.39 00:33:02.201 clat percentiles (usec): 00:33:02.201 | 1.00th=[ 4490], 5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 8160], 00:33:02.201 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:33:02.201 | 70.00th=[10159], 80.00th=[12387], 90.00th=[14746], 95.00th=[22414], 00:33:02.201 | 99.00th=[48497], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:33:02.201 | 99.99th=[49546] 00:33:02.201 bw ( KiB/s): min=20480, max=24576, per=31.71%, avg=22528.00, stdev=2896.31, samples=2 00:33:02.201 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:33:02.201 lat (msec) : 2=0.01%, 4=0.30%, 10=60.18%, 20=33.90%, 50=5.62% 00:33:02.201 cpu : usr=5.38%, sys=5.78%, ctx=370, majf=0, minf=1 00:33:02.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:02.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.201 issued rwts: total=5490,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.201 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.201 job1: (groupid=0, jobs=1): err= 0: pid=1993592: Wed Nov 27 05:54:49 2024 00:33:02.201 read: IOPS=3467, BW=13.5MiB/s 
(14.2MB/s)(14.2MiB/1052msec) 00:33:02.201 slat (nsec): min=1496, max=44650k, avg=102113.99, stdev=880981.53 00:33:02.201 clat (usec): min=746, max=93106, avg=13865.62, stdev=12324.87 00:33:02.201 lat (usec): min=753, max=93109, avg=13967.74, stdev=12386.42 00:33:02.201 clat percentiles (usec): 00:33:02.201 | 1.00th=[ 1237], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:33:02.201 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:33:02.201 | 70.00th=[11207], 80.00th=[12518], 90.00th=[21627], 95.00th=[21890], 00:33:02.201 | 99.00th=[89654], 99.50th=[91751], 99.90th=[92799], 99.95th=[92799], 00:33:02.201 | 99.99th=[92799] 00:33:02.201 write: IOPS=3893, BW=15.2MiB/s (15.9MB/s)(16.0MiB/1052msec); 0 zone resets 00:33:02.201 slat (usec): min=2, max=23663, avg=148.30, stdev=1052.01 00:33:02.201 clat (usec): min=1756, max=119707, avg=20096.78, stdev=23464.51 00:33:02.201 lat (usec): min=1772, max=119715, avg=20245.08, stdev=23588.32 00:33:02.201 clat percentiles (msec): 00:33:02.201 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:33:02.201 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:33:02.201 | 70.00th=[ 12], 80.00th=[ 21], 90.00th=[ 56], 95.00th=[ 81], 00:33:02.201 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 121], 00:33:02.201 | 99.99th=[ 121] 00:33:02.201 bw ( KiB/s): min= 7592, max=24713, per=22.74%, avg=16152.50, stdev=12106.38, samples=2 00:33:02.201 iops : min= 1898, max= 6178, avg=4038.00, stdev=3026.42, samples=2 00:33:02.201 lat (usec) : 750=0.01%, 1000=0.06% 00:33:02.201 lat (msec) : 2=1.16%, 4=0.36%, 10=23.64%, 20=56.56%, 50=10.54% 00:33:02.201 lat (msec) : 100=6.22%, 250=1.43% 00:33:02.201 cpu : usr=2.85%, sys=3.04%, ctx=491, majf=0, minf=2 00:33:02.201 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:02.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:33:02.202 issued rwts: total=3648,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.202 job2: (groupid=0, jobs=1): err= 0: pid=1993593: Wed Nov 27 05:54:49 2024 00:33:02.202 read: IOPS=4306, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1009msec) 00:33:02.202 slat (nsec): min=1426, max=13613k, avg=111869.46, stdev=834578.04 00:33:02.202 clat (usec): min=2532, max=31269, avg=14416.39, stdev=3074.07 00:33:02.202 lat (usec): min=4177, max=31274, avg=14528.26, stdev=3128.59 00:33:02.202 clat percentiles (usec): 00:33:02.202 | 1.00th=[ 7439], 5.00th=[10814], 10.00th=[11731], 20.00th=[12387], 00:33:02.202 | 30.00th=[13042], 40.00th=[13566], 50.00th=[13698], 60.00th=[14222], 00:33:02.202 | 70.00th=[15139], 80.00th=[16450], 90.00th=[17957], 95.00th=[20317], 00:33:02.202 | 99.00th=[25560], 99.50th=[26608], 99.90th=[31327], 99.95th=[31327], 00:33:02.202 | 99.99th=[31327] 00:33:02.202 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:33:02.202 slat (usec): min=2, max=10333, avg=104.89, stdev=679.65 00:33:02.202 clat (usec): min=1813, max=31254, avg=14012.82, stdev=3952.05 00:33:02.202 lat (usec): min=1823, max=31257, avg=14117.71, stdev=4010.23 00:33:02.202 clat percentiles (usec): 00:33:02.202 | 1.00th=[ 5669], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[11076], 00:33:02.202 | 30.00th=[11600], 40.00th=[12780], 50.00th=[13173], 60.00th=[13566], 00:33:02.202 | 70.00th=[14615], 80.00th=[17433], 90.00th=[21103], 95.00th=[21627], 00:33:02.202 | 99.00th=[21890], 99.50th=[22676], 99.90th=[23725], 99.95th=[23725], 00:33:02.202 | 99.99th=[31327] 00:33:02.202 bw ( KiB/s): min=18104, max=18760, per=25.95%, avg=18432.00, stdev=463.86, samples=2 00:33:02.202 iops : min= 4526, max= 4690, avg=4608.00, stdev=115.97, samples=2 00:33:02.202 lat (msec) : 2=0.07%, 4=0.20%, 10=5.63%, 20=84.08%, 50=10.02% 00:33:02.202 cpu : usr=3.77%, sys=6.55%, ctx=297, majf=0, minf=1 00:33:02.202 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:33:02.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.202 issued rwts: total=4345,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.202 job3: (groupid=0, jobs=1): err= 0: pid=1993594: Wed Nov 27 05:54:49 2024 00:33:02.202 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:33:02.202 slat (nsec): min=1096, max=11871k, avg=102683.19, stdev=650740.30 00:33:02.202 clat (usec): min=1797, max=52797, avg=13683.46, stdev=8381.92 00:33:02.202 lat (usec): min=1800, max=59770, avg=13786.14, stdev=8436.21 00:33:02.202 clat percentiles (usec): 00:33:02.202 | 1.00th=[ 3654], 5.00th=[ 7308], 10.00th=[ 8979], 20.00th=[ 9896], 00:33:02.202 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:33:02.202 | 70.00th=[12125], 80.00th=[13042], 90.00th=[20055], 95.00th=[37487], 00:33:02.202 | 99.00th=[44303], 99.50th=[46924], 99.90th=[51643], 99.95th=[52691], 00:33:02.202 | 99.99th=[52691] 00:33:02.202 write: IOPS=4307, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1009msec); 0 zone resets 00:33:02.202 slat (nsec): min=1804, max=20182k, avg=128499.93, stdev=683474.27 00:33:02.202 clat (usec): min=526, max=59186, avg=15783.88, stdev=13010.76 00:33:02.202 lat (usec): min=575, max=59196, avg=15912.38, stdev=13107.88 00:33:02.202 clat percentiles (usec): 00:33:02.202 | 1.00th=[ 4686], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[10683], 00:33:02.202 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:33:02.202 | 70.00th=[11731], 80.00th=[12649], 90.00th=[43779], 95.00th=[53740], 00:33:02.202 | 99.00th=[56886], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:33:02.202 | 99.99th=[58983] 00:33:02.202 bw ( KiB/s): min=11264, max=22480, per=23.75%, avg=16872.00, stdev=7930.91, samples=2 00:33:02.202 iops : min= 
2816, max= 5620, avg=4218.00, stdev=1982.73, samples=2 00:33:02.202 lat (usec) : 750=0.01% 00:33:02.202 lat (msec) : 2=0.14%, 4=0.57%, 10=17.44%, 20=69.72%, 50=7.68% 00:33:02.202 lat (msec) : 100=4.44% 00:33:02.202 cpu : usr=2.18%, sys=4.07%, ctx=487, majf=0, minf=1 00:33:02.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:33:02.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:02.202 issued rwts: total=4096,4346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:02.202 00:33:02.202 Run status group 0 (all jobs): 00:33:02.202 READ: bw=65.3MiB/s (68.4MB/s), 13.5MiB/s-21.3MiB/s (14.2MB/s-22.4MB/s), io=68.7MiB (72.0MB), run=1005-1052msec 00:33:02.202 WRITE: bw=69.4MiB/s (72.7MB/s), 15.2MiB/s-21.9MiB/s (15.9MB/s-23.0MB/s), io=73.0MiB (76.5MB), run=1005-1052msec 00:33:02.202 00:33:02.202 Disk stats (read/write): 00:33:02.202 nvme0n1: ios=4637/4902, merge=0/0, ticks=43031/39483, in_queue=82514, util=97.09% 00:33:02.202 nvme0n2: ios=3607/3695, merge=0/0, ticks=12693/18438, in_queue=31131, util=98.17% 00:33:02.202 nvme0n3: ios=3643/4020, merge=0/0, ticks=36657/36803, in_queue=73460, util=98.02% 00:33:02.202 nvme0n4: ios=3131/3439, merge=0/0, ticks=13920/18029, in_queue=31949, util=97.70% 00:33:02.202 05:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:02.202 [global] 00:33:02.202 thread=1 00:33:02.202 invalidate=1 00:33:02.202 rw=randwrite 00:33:02.202 time_based=1 00:33:02.202 runtime=1 00:33:02.202 ioengine=libaio 00:33:02.202 direct=1 00:33:02.202 bs=4096 00:33:02.202 iodepth=128 00:33:02.202 norandommap=0 00:33:02.202 numjobs=1 00:33:02.202 00:33:02.202 verify_dump=1 00:33:02.202 verify_backlog=512 
00:33:02.202 verify_state_save=0 00:33:02.202 do_verify=1 00:33:02.202 verify=crc32c-intel 00:33:02.202 [job0] 00:33:02.202 filename=/dev/nvme0n1 00:33:02.202 [job1] 00:33:02.202 filename=/dev/nvme0n2 00:33:02.202 [job2] 00:33:02.202 filename=/dev/nvme0n3 00:33:02.202 [job3] 00:33:02.202 filename=/dev/nvme0n4 00:33:02.202 Could not set queue depth (nvme0n1) 00:33:02.202 Could not set queue depth (nvme0n2) 00:33:02.202 Could not set queue depth (nvme0n3) 00:33:02.202 Could not set queue depth (nvme0n4) 00:33:02.461 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.461 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.461 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.461 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:02.461 fio-3.35 00:33:02.461 Starting 4 threads 00:33:03.863 00:33:03.863 job0: (groupid=0, jobs=1): err= 0: pid=1993962: Wed Nov 27 05:54:51 2024 00:33:03.863 read: IOPS=5966, BW=23.3MiB/s (24.4MB/s)(23.5MiB/1008msec) 00:33:03.863 slat (nsec): min=1093, max=13863k, avg=83538.45, stdev=607631.42 00:33:03.863 clat (usec): min=1124, max=38070, avg=10661.96, stdev=4905.92 00:33:03.863 lat (usec): min=4895, max=38095, avg=10745.50, stdev=4951.06 00:33:03.863 clat percentiles (usec): 00:33:03.863 | 1.00th=[ 5473], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7242], 00:33:03.863 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9503], 00:33:03.863 | 70.00th=[10683], 80.00th=[14615], 90.00th=[19268], 95.00th=[21627], 00:33:03.863 | 99.00th=[25822], 99.50th=[25822], 99.90th=[27919], 99.95th=[27919], 00:33:03.863 | 99.99th=[38011] 00:33:03.863 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:33:03.863 slat (nsec): min=1747, max=17636k, avg=76227.19, 
stdev=532088.01 00:33:03.863 clat (usec): min=617, max=36489, avg=10360.73, stdev=4471.18 00:33:03.863 lat (usec): min=636, max=36520, avg=10436.96, stdev=4509.20 00:33:03.863 clat percentiles (usec): 00:33:03.863 | 1.00th=[ 4948], 5.00th=[ 6652], 10.00th=[ 7439], 20.00th=[ 7963], 00:33:03.863 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 9241], 00:33:03.863 | 70.00th=[10552], 80.00th=[11469], 90.00th=[17171], 95.00th=[19792], 00:33:03.863 | 99.00th=[30016], 99.50th=[30016], 99.90th=[34866], 99.95th=[34866], 00:33:03.863 | 99.99th=[36439] 00:33:03.863 bw ( KiB/s): min=21216, max=27936, per=36.37%, avg=24576.00, stdev=4751.76, samples=2 00:33:03.863 iops : min= 5304, max= 6984, avg=6144.00, stdev=1187.94, samples=2 00:33:03.863 lat (usec) : 750=0.07% 00:33:03.863 lat (msec) : 2=0.02%, 4=0.12%, 10=65.56%, 20=28.99%, 50=5.24% 00:33:03.863 cpu : usr=3.48%, sys=5.86%, ctx=641, majf=0, minf=1 00:33:03.863 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:03.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.863 issued rwts: total=6014,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.864 job1: (groupid=0, jobs=1): err= 0: pid=1993963: Wed Nov 27 05:54:51 2024 00:33:03.864 read: IOPS=2536, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:33:03.864 slat (nsec): min=1104, max=15521k, avg=186700.99, stdev=1244177.99 00:33:03.864 clat (usec): min=5758, max=82854, avg=22768.95, stdev=14885.02 00:33:03.864 lat (usec): min=7584, max=82862, avg=22955.65, stdev=14985.63 00:33:03.864 clat percentiles (usec): 00:33:03.864 | 1.00th=[ 9372], 5.00th=[11600], 10.00th=[12518], 20.00th=[12911], 00:33:03.864 | 30.00th=[13042], 40.00th=[13435], 50.00th=[14746], 60.00th=[16319], 00:33:03.864 | 70.00th=[25035], 80.00th=[36963], 90.00th=[46400], 95.00th=[51119], 
00:33:03.864 | 99.00th=[72877], 99.50th=[78119], 99.90th=[83362], 99.95th=[83362], 00:33:03.864 | 99.99th=[83362] 00:33:03.864 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec); 0 zone resets 00:33:03.864 slat (nsec): min=1892, max=18076k, avg=156886.61, stdev=789866.93 00:33:03.864 clat (usec): min=1307, max=82838, avg=22520.73, stdev=18294.42 00:33:03.864 lat (usec): min=1314, max=82847, avg=22677.62, stdev=18403.31 00:33:03.864 clat percentiles (usec): 00:33:03.864 | 1.00th=[ 2311], 5.00th=[ 2933], 10.00th=[ 7308], 20.00th=[ 9765], 00:33:03.864 | 30.00th=[10683], 40.00th=[11600], 50.00th=[12125], 60.00th=[18482], 00:33:03.864 | 70.00th=[26608], 80.00th=[45351], 90.00th=[51643], 95.00th=[59507], 00:33:03.864 | 99.00th=[68682], 99.50th=[71828], 99.90th=[72877], 99.95th=[82314], 00:33:03.864 | 99.99th=[83362] 00:33:03.864 bw ( KiB/s): min=11320, max=12288, per=17.47%, avg=11804.00, stdev=684.48, samples=2 00:33:03.864 iops : min= 2830, max= 3072, avg=2951.00, stdev=171.12, samples=2 00:33:03.864 lat (msec) : 2=0.14%, 4=4.56%, 10=9.38%, 20=51.02%, 50=24.97% 00:33:03.864 lat (msec) : 100=9.93% 00:33:03.864 cpu : usr=2.67%, sys=3.17%, ctx=325, majf=0, minf=1 00:33:03.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:03.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.864 issued rwts: total=2567,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.864 job2: (groupid=0, jobs=1): err= 0: pid=1993966: Wed Nov 27 05:54:51 2024 00:33:03.864 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:33:03.864 slat (nsec): min=1622, max=18351k, avg=107215.75, stdev=850184.98 00:33:03.864 clat (usec): min=4098, max=53302, avg=14162.63, stdev=6905.49 00:33:03.864 lat (usec): min=4104, max=53327, avg=14269.85, stdev=6980.60 00:33:03.864 clat 
percentiles (usec): 00:33:03.864 | 1.00th=[ 4228], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8979], 00:33:03.864 | 30.00th=[10028], 40.00th=[10814], 50.00th=[12387], 60.00th=[13173], 00:33:03.864 | 70.00th=[15270], 80.00th=[18482], 90.00th=[22938], 95.00th=[31065], 00:33:03.864 | 99.00th=[35914], 99.50th=[35914], 99.90th=[40633], 99.95th=[47973], 00:33:03.864 | 99.99th=[53216] 00:33:03.864 write: IOPS=4868, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1005msec); 0 zone resets 00:33:03.864 slat (usec): min=2, max=14031, avg=96.48, stdev=725.32 00:33:03.864 clat (usec): min=561, max=42105, avg=12679.62, stdev=4798.03 00:33:03.864 lat (usec): min=2296, max=42136, avg=12776.10, stdev=4870.66 00:33:03.864 clat percentiles (usec): 00:33:03.864 | 1.00th=[ 4752], 5.00th=[ 7701], 10.00th=[ 8848], 20.00th=[ 9503], 00:33:03.864 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11994], 60.00th=[12518], 00:33:03.864 | 70.00th=[12911], 80.00th=[14091], 90.00th=[19268], 95.00th=[25822], 00:33:03.864 | 99.00th=[30016], 99.50th=[30016], 99.90th=[31851], 99.95th=[35914], 00:33:03.864 | 99.99th=[42206] 00:33:03.864 bw ( KiB/s): min=17640, max=20480, per=28.20%, avg=19060.00, stdev=2008.18, samples=2 00:33:03.864 iops : min= 4410, max= 5120, avg=4765.00, stdev=502.05, samples=2 00:33:03.864 lat (usec) : 750=0.01% 00:33:03.864 lat (msec) : 4=0.14%, 10=29.30%, 20=59.50%, 50=11.04%, 100=0.01% 00:33:03.864 cpu : usr=3.29%, sys=6.77%, ctx=337, majf=0, minf=1 00:33:03.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:33:03.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.864 issued rwts: total=4608,4893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:03.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.864 job3: (groupid=0, jobs=1): err= 0: pid=1993967: Wed Nov 27 05:54:51 2024 00:33:03.864 read: IOPS=2542, BW=9.93MiB/s 
(10.4MB/s)(10.0MiB/1007msec) 00:33:03.864 slat (usec): min=2, max=14504, avg=152.36, stdev=1017.56 00:33:03.864 clat (usec): min=6137, max=75253, avg=19913.55, stdev=8888.39 00:33:03.864 lat (usec): min=6142, max=85107, avg=20065.91, stdev=8965.51 00:33:03.864 clat percentiles (usec): 00:33:03.864 | 1.00th=[ 6259], 5.00th=[ 9110], 10.00th=[11469], 20.00th=[13698], 00:33:03.864 | 30.00th=[15795], 40.00th=[17433], 50.00th=[18744], 60.00th=[19792], 00:33:03.864 | 70.00th=[20841], 80.00th=[24511], 90.00th=[30016], 95.00th=[32900], 00:33:03.864 | 99.00th=[53216], 99.50th=[66323], 99.90th=[74974], 99.95th=[74974], 00:33:03.864 | 99.99th=[74974] 00:33:03.864 write: IOPS=2968, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1007msec); 0 zone resets 00:33:03.864 slat (nsec): min=1835, max=18790k, avg=199371.32, stdev=1144388.65 00:33:03.864 clat (usec): min=1461, max=109340, avg=25553.89, stdev=19803.13 00:33:03.864 lat (msec): min=6, max=109, avg=25.75, stdev=19.94 00:33:03.864 clat percentiles (msec): 00:33:03.864 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 13], 00:33:03.864 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 20], 00:33:03.864 | 70.00th=[ 24], 80.00th=[ 40], 90.00th=[ 52], 95.00th=[ 64], 00:33:03.864 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 110], 99.95th=[ 110], 00:33:03.864 | 99.99th=[ 110] 00:33:03.864 bw ( KiB/s): min=10480, max=12408, per=16.93%, avg=11444.00, stdev=1363.30, samples=2 00:33:03.864 iops : min= 2620, max= 3102, avg=2861.00, stdev=340.83, samples=2 00:33:03.864 lat (msec) : 2=0.02%, 10=6.52%, 20=55.51%, 50=30.35%, 100=6.70% 00:33:03.864 lat (msec) : 250=0.90% 00:33:03.864 cpu : usr=2.09%, sys=2.98%, ctx=222, majf=0, minf=1 00:33:03.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:03.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:03.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:03.864 issued rwts: total=2560,2989,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:33:03.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:03.864 00:33:03.864 Run status group 0 (all jobs): 00:33:03.864 READ: bw=60.8MiB/s (63.7MB/s), 9.91MiB/s-23.3MiB/s (10.4MB/s-24.4MB/s), io=61.5MiB (64.5MB), run=1005-1012msec 00:33:03.864 WRITE: bw=66.0MiB/s (69.2MB/s), 11.6MiB/s-23.8MiB/s (12.2MB/s-25.0MB/s), io=66.8MiB (70.0MB), run=1005-1012msec 00:33:03.864 00:33:03.864 Disk stats (read/write): 00:33:03.864 nvme0n1: ios=5170/5600, merge=0/0, ticks=26986/27317, in_queue=54303, util=86.87% 00:33:03.864 nvme0n2: ios=2074/2183, merge=0/0, ticks=29297/37640, in_queue=66937, util=98.68% 00:33:03.864 nvme0n3: ios=3795/4096, merge=0/0, ticks=28984/28190, in_queue=57174, util=88.94% 00:33:03.864 nvme0n4: ios=2106/2231, merge=0/0, ticks=20437/34277, in_queue=54714, util=98.00% 00:33:03.864 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:03.864 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1994199 00:33:03.864 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:03.864 05:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:03.864 [global] 00:33:03.864 thread=1 00:33:03.864 invalidate=1 00:33:03.864 rw=read 00:33:03.864 time_based=1 00:33:03.864 runtime=10 00:33:03.864 ioengine=libaio 00:33:03.864 direct=1 00:33:03.864 bs=4096 00:33:03.864 iodepth=1 00:33:03.864 norandommap=1 00:33:03.864 numjobs=1 00:33:03.864 00:33:03.864 [job0] 00:33:03.864 filename=/dev/nvme0n1 00:33:03.864 [job1] 00:33:03.864 filename=/dev/nvme0n2 00:33:03.864 [job2] 00:33:03.864 filename=/dev/nvme0n3 00:33:03.864 [job3] 00:33:03.864 filename=/dev/nvme0n4 00:33:03.864 Could not set queue depth (nvme0n1) 00:33:03.864 Could not set queue depth (nvme0n2) 
00:33:03.864 Could not set queue depth (nvme0n3) 00:33:03.864 Could not set queue depth (nvme0n4) 00:33:04.128 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.128 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.128 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.128 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:04.128 fio-3.35 00:33:04.128 Starting 4 threads 00:33:06.648 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:06.957 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:06.957 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=331776, buflen=4096 00:33:06.957 fio: pid=1994338, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:07.245 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:07.245 05:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:07.245 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=12886016, buflen=4096 00:33:07.245 fio: pid=1994337, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:07.245 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=54558720, buflen=4096 00:33:07.245 fio: pid=1994335, err=95/file:io_u.c:1889, func=io_u error, error=Operation 
not supported 00:33:07.245 05:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:07.245 05:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:07.517 05:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:07.517 05:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:07.517 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=622592, buflen=4096 00:33:07.517 fio: pid=1994336, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:07.517 00:33:07.517 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1994335: Wed Nov 27 05:54:55 2024 00:33:07.517 read: IOPS=4309, BW=16.8MiB/s (17.7MB/s)(52.0MiB/3091msec) 00:33:07.517 slat (usec): min=6, max=11632, avg= 9.75, stdev=128.74 00:33:07.517 clat (usec): min=155, max=41434, avg=218.66, stdev=955.60 00:33:07.517 lat (usec): min=168, max=44887, avg=228.42, stdev=976.72 00:33:07.517 clat percentiles (usec): 00:33:07.517 | 1.00th=[ 178], 5.00th=[ 180], 10.00th=[ 180], 20.00th=[ 182], 00:33:07.517 | 30.00th=[ 184], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:33:07.517 | 70.00th=[ 192], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 249], 00:33:07.517 | 99.00th=[ 326], 99.50th=[ 424], 99.90th=[ 537], 99.95th=[41157], 00:33:07.517 | 99.99th=[41157] 00:33:07.517 bw ( KiB/s): min= 8555, max=20552, per=87.74%, avg=17604.50, stdev=4801.46, samples=6 00:33:07.517 iops : min= 2138, max= 5138, avg=4401.00, stdev=1200.65, samples=6 00:33:07.517 lat 
(usec) : 250=95.53%, 500=4.26%, 750=0.14% 00:33:07.517 lat (msec) : 2=0.01%, 50=0.06% 00:33:07.517 cpu : usr=2.43%, sys=6.54%, ctx=13325, majf=0, minf=2 00:33:07.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.517 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.517 issued rwts: total=13321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:07.518 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1994336: Wed Nov 27 05:54:55 2024 00:33:07.518 read: IOPS=45, BW=183KiB/s (187kB/s)(608KiB/3329msec) 00:33:07.518 slat (usec): min=3, max=25783, avg=316.26, stdev=2622.43 00:33:07.518 clat (usec): min=186, max=42052, avg=21415.51, stdev=20401.04 00:33:07.518 lat (usec): min=196, max=66920, avg=21733.68, stdev=20858.28 00:33:07.518 clat percentiles (usec): 00:33:07.518 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 212], 20.00th=[ 227], 00:33:07.518 | 30.00th=[ 262], 40.00th=[ 343], 50.00th=[40633], 60.00th=[40633], 00:33:07.518 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:33:07.518 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:07.518 | 99.99th=[42206] 00:33:07.518 bw ( KiB/s): min= 136, max= 240, per=0.94%, avg=189.50, stdev=48.22, samples=6 00:33:07.518 iops : min= 34, max= 60, avg=47.33, stdev=12.11, samples=6 00:33:07.518 lat (usec) : 250=24.84%, 500=22.22%, 1000=0.65% 00:33:07.518 lat (msec) : 50=51.63% 00:33:07.518 cpu : usr=0.00%, sys=0.18%, ctx=157, majf=0, minf=2 00:33:07.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.518 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.518 issued rwts: 
total=153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:07.518 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1994337: Wed Nov 27 05:54:55 2024 00:33:07.518 read: IOPS=1084, BW=4336KiB/s (4440kB/s)(12.3MiB/2902msec) 00:33:07.518 slat (usec): min=6, max=15827, avg=12.79, stdev=282.01 00:33:07.518 clat (usec): min=180, max=42224, avg=900.61, stdev=5260.38 00:33:07.518 lat (usec): min=188, max=58051, avg=913.40, stdev=5309.48 00:33:07.518 clat percentiles (usec): 00:33:07.518 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 206], 00:33:07.518 | 30.00th=[ 208], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 212], 00:33:07.518 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 225], 95.00th=[ 235], 00:33:07.518 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:33:07.518 | 99.99th=[42206] 00:33:07.518 bw ( KiB/s): min= 96, max=18008, per=25.01%, avg=5019.20, stdev=7818.04, samples=5 00:33:07.518 iops : min= 24, max= 4502, avg=1254.80, stdev=1954.51, samples=5 00:33:07.518 lat (usec) : 250=96.76%, 500=1.49%, 750=0.03% 00:33:07.518 lat (msec) : 50=1.68% 00:33:07.518 cpu : usr=0.31%, sys=1.03%, ctx=3150, majf=0, minf=1 00:33:07.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.518 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.518 issued rwts: total=3147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:07.518 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1994338: Wed Nov 27 05:54:55 2024 00:33:07.518 read: IOPS=30, BW=120KiB/s (123kB/s)(324KiB/2699msec) 00:33:07.518 slat (nsec): min=3489, max=31475, avg=12327.16, stdev=5935.67 00:33:07.518 clat (usec): 
min=232, max=42076, avg=33037.59, stdev=16305.39 00:33:07.518 lat (usec): min=249, max=42103, avg=33049.80, stdev=16305.46 00:33:07.518 clat percentiles (usec): 00:33:07.518 | 1.00th=[ 233], 5.00th=[ 277], 10.00th=[ 338], 20.00th=[40633], 00:33:07.518 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:07.518 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:33:07.518 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:07.518 | 99.99th=[42206] 00:33:07.518 bw ( KiB/s): min= 96, max= 224, per=0.60%, avg=121.60, stdev=57.24, samples=5 00:33:07.518 iops : min= 24, max= 56, avg=30.40, stdev=14.31, samples=5 00:33:07.518 lat (usec) : 250=1.22%, 500=15.85%, 750=2.44% 00:33:07.518 lat (msec) : 50=79.27% 00:33:07.518 cpu : usr=0.00%, sys=0.04%, ctx=82, majf=0, minf=2 00:33:07.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:07.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.518 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:07.518 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:07.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:07.518 00:33:07.518 Run status group 0 (all jobs): 00:33:07.518 READ: bw=19.6MiB/s (20.5MB/s), 120KiB/s-16.8MiB/s (123kB/s-17.7MB/s), io=65.2MiB (68.4MB), run=2699-3329msec 00:33:07.518 00:33:07.518 Disk stats (read/write): 00:33:07.518 nvme0n1: ios=13343/0, merge=0/0, ticks=2888/0, in_queue=2888, util=98.15% 00:33:07.518 nvme0n2: ios=186/0, merge=0/0, ticks=3811/0, in_queue=3811, util=98.53% 00:33:07.518 nvme0n3: ios=3183/0, merge=0/0, ticks=3221/0, in_queue=3221, util=99.59% 00:33:07.518 nvme0n4: ios=78/0, merge=0/0, ticks=2552/0, in_queue=2552, util=96.38% 00:33:07.803 05:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:33:07.803 05:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:08.088 05:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.088 05:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:08.088 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.088 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:08.347 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:08.347 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1994199 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:08.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:08.606 nvmf hotplug test: fio failed as expected 00:33:08.606 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:08.866 05:54:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:08.866 rmmod nvme_tcp 00:33:08.866 rmmod nvme_fabrics 00:33:08.866 rmmod nvme_keyring 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1991724 ']' 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1991724 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1991724 ']' 00:33:08.866 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1991724 00:33:09.126 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:09.126 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:09.126 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1991724 00:33:09.126 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:09.126 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:09.126 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1991724' 00:33:09.126 killing process with pid 1991724 00:33:09.126 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1991724 00:33:09.126 05:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1991724 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.126 05:54:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.126 05:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:11.665 00:33:11.665 real 0m25.962s 00:33:11.665 user 1m30.347s 00:33:11.665 sys 0m11.273s 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:11.665 ************************************ 00:33:11.665 END TEST nvmf_fio_target 00:33:11.665 ************************************ 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:11.665 ************************************ 00:33:11.665 START TEST nvmf_bdevio 00:33:11.665 ************************************ 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:11.665 * Looking for test storage... 00:33:11.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:11.665 05:54:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:11.665 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.666 --rc genhtml_branch_coverage=1 00:33:11.666 --rc genhtml_function_coverage=1 00:33:11.666 --rc genhtml_legend=1 00:33:11.666 --rc geninfo_all_blocks=1 00:33:11.666 --rc geninfo_unexecuted_blocks=1 00:33:11.666 00:33:11.666 ' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.666 --rc genhtml_branch_coverage=1 00:33:11.666 --rc genhtml_function_coverage=1 00:33:11.666 --rc genhtml_legend=1 00:33:11.666 --rc geninfo_all_blocks=1 00:33:11.666 --rc geninfo_unexecuted_blocks=1 00:33:11.666 00:33:11.666 ' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.666 --rc genhtml_branch_coverage=1 00:33:11.666 --rc genhtml_function_coverage=1 00:33:11.666 --rc genhtml_legend=1 00:33:11.666 --rc geninfo_all_blocks=1 00:33:11.666 --rc geninfo_unexecuted_blocks=1 00:33:11.666 00:33:11.666 ' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.666 --rc genhtml_branch_coverage=1 00:33:11.666 --rc genhtml_function_coverage=1 00:33:11.666 --rc genhtml_legend=1 00:33:11.666 --rc 
geninfo_all_blocks=1 00:33:11.666 --rc geninfo_unexecuted_blocks=1 00:33:11.666 00:33:11.666 ' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:11.666 05:54:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:11.666 05:54:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:11.666 05:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.250 05:55:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:18.250 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:18.250 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.250 05:55:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:18.250 Found net devices under 0000:86:00.0: cvl_0_0 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:18.250 Found net devices under 0000:86:00.1: cvl_0_1 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.250 05:55:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.250 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:18.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:33:18.251 00:33:18.251 --- 10.0.0.2 ping statistics --- 00:33:18.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.251 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:18.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:33:18.251 00:33:18.251 --- 10.0.0.1 ping statistics --- 00:33:18.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.251 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1998670 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1998670 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1998670 ']' 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:18.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:18.251 05:55:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.251 [2024-11-27 05:55:05.420606] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:18.251 [2024-11-27 05:55:05.421535] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:33:18.251 [2024-11-27 05:55:05.421571] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:18.251 [2024-11-27 05:55:05.500931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:18.251 [2024-11-27 05:55:05.543489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:18.251 [2024-11-27 05:55:05.543530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:18.251 [2024-11-27 05:55:05.543537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:18.251 [2024-11-27 05:55:05.543544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:18.251 [2024-11-27 05:55:05.543548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:18.251 [2024-11-27 05:55:05.545117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:18.251 [2024-11-27 05:55:05.545225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:18.251 [2024-11-27 05:55:05.545352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:18.251 [2024-11-27 05:55:05.545354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:18.251 [2024-11-27 05:55:05.614019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:18.251 [2024-11-27 05:55:05.614955] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:18.251 [2024-11-27 05:55:05.615045] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:18.251 [2024-11-27 05:55:05.615290] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:18.251 [2024-11-27 05:55:05.615349] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.511 [2024-11-27 05:55:06.302134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.511 Malloc0 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:18.511 [2024-11-27 05:55:06.386342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:18.511 { 00:33:18.511 "params": { 00:33:18.511 "name": "Nvme$subsystem", 00:33:18.511 "trtype": "$TEST_TRANSPORT", 00:33:18.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:18.511 "adrfam": "ipv4", 00:33:18.511 "trsvcid": "$NVMF_PORT", 00:33:18.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:18.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:18.511 "hdgst": ${hdgst:-false}, 00:33:18.511 "ddgst": ${ddgst:-false} 00:33:18.511 }, 00:33:18.511 "method": "bdev_nvme_attach_controller" 00:33:18.511 } 00:33:18.511 EOF 00:33:18.511 )") 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:18.511 05:55:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:18.511 "params": { 00:33:18.511 "name": "Nvme1", 00:33:18.511 "trtype": "tcp", 00:33:18.511 "traddr": "10.0.0.2", 00:33:18.511 "adrfam": "ipv4", 00:33:18.511 "trsvcid": "4420", 00:33:18.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:18.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:18.511 "hdgst": false, 00:33:18.511 "ddgst": false 00:33:18.511 }, 00:33:18.511 "method": "bdev_nvme_attach_controller" 00:33:18.511 }' 00:33:18.511 [2024-11-27 05:55:06.438172] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:33:18.511 [2024-11-27 05:55:06.438220] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998835 ] 00:33:18.771 [2024-11-27 05:55:06.515932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:18.771 [2024-11-27 05:55:06.559505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.771 [2024-11-27 05:55:06.559617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.771 [2024-11-27 05:55:06.559617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:18.771 I/O targets: 00:33:18.771 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:18.771 00:33:18.771 00:33:18.771 CUnit - A unit testing framework for C - Version 2.1-3 00:33:18.771 http://cunit.sourceforge.net/ 00:33:18.771 00:33:18.771 00:33:18.771 Suite: bdevio tests on: Nvme1n1 00:33:18.771 Test: blockdev write read block ...passed 00:33:19.030 Test: blockdev write zeroes read block ...passed 00:33:19.030 Test: blockdev write zeroes read no split ...passed 00:33:19.030 Test: blockdev 
write zeroes read split ...passed 00:33:19.030 Test: blockdev write zeroes read split partial ...passed 00:33:19.030 Test: blockdev reset ...[2024-11-27 05:55:06.863236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:19.030 [2024-11-27 05:55:06.863299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a8350 (9): Bad file descriptor 00:33:19.030 [2024-11-27 05:55:06.866539] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:33:19.030 passed 00:33:19.030 Test: blockdev write read 8 blocks ...passed 00:33:19.030 Test: blockdev write read size > 128k ...passed 00:33:19.030 Test: blockdev write read invalid size ...passed 00:33:19.030 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:19.030 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:19.030 Test: blockdev write read max offset ...passed 00:33:19.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:19.289 Test: blockdev writev readv 8 blocks ...passed 00:33:19.289 Test: blockdev writev readv 30 x 1block ...passed 00:33:19.289 Test: blockdev writev readv block ...passed 00:33:19.289 Test: blockdev writev readv size > 128k ...passed 00:33:19.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:19.289 Test: blockdev comparev and writev ...[2024-11-27 05:55:07.160722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.289 [2024-11-27 05:55:07.160749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.289 [2024-11-27 05:55:07.160763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.289 
[2024-11-27 05:55:07.160771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:19.289 [2024-11-27 05:55:07.161063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.289 [2024-11-27 05:55:07.161075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:19.289 [2024-11-27 05:55:07.161087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.289 [2024-11-27 05:55:07.161098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:19.289 [2024-11-27 05:55:07.161418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.289 [2024-11-27 05:55:07.161429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:19.289 [2024-11-27 05:55:07.161443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.289 [2024-11-27 05:55:07.161451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:19.289 [2024-11-27 05:55:07.161751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.289 [2024-11-27 05:55:07.161762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:19.289 [2024-11-27 05:55:07.161773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:19.289 [2024-11-27 05:55:07.161780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:19.289 passed 00:33:19.289 Test: blockdev nvme passthru rw ...passed 00:33:19.289 Test: blockdev nvme passthru vendor specific ...[2024-11-27 05:55:07.244069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.289 [2024-11-27 05:55:07.244085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:19.289 [2024-11-27 05:55:07.244192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.289 [2024-11-27 05:55:07.244202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:19.289 [2024-11-27 05:55:07.244305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.289 [2024-11-27 05:55:07.244314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:19.289 [2024-11-27 05:55:07.244421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:19.289 [2024-11-27 05:55:07.244430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:19.289 passed 00:33:19.289 Test: blockdev nvme admin passthru ...passed 00:33:19.549 Test: blockdev copy ...passed 00:33:19.549 00:33:19.549 Run Summary: Type Total Ran Passed Failed Inactive 00:33:19.549 suites 1 1 n/a 0 0 00:33:19.549 tests 23 23 23 0 0 00:33:19.549 asserts 152 152 152 0 n/a 00:33:19.549 00:33:19.549 Elapsed time = 1.192 
seconds 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:19.549 rmmod nvme_tcp 00:33:19.549 rmmod nvme_fabrics 00:33:19.549 rmmod nvme_keyring 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1998670 ']' 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1998670 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1998670 ']' 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1998670 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:19.549 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1998670 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1998670' 00:33:19.809 killing process with pid 1998670 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1998670 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1998670 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.809 05:55:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.345 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.345 00:33:22.345 real 0m10.581s 00:33:22.346 user 0m8.795s 00:33:22.346 sys 0m5.248s 00:33:22.346 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.346 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:22.346 ************************************ 00:33:22.346 END TEST nvmf_bdevio 00:33:22.346 ************************************ 00:33:22.346 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:22.346 00:33:22.346 real 4m34.999s 00:33:22.346 user 9m3.256s 00:33:22.346 sys 1m52.920s 00:33:22.346 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:33:22.346 05:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:22.346 ************************************ 00:33:22.346 END TEST nvmf_target_core_interrupt_mode 00:33:22.346 ************************************ 00:33:22.346 05:55:09 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:22.346 05:55:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:22.346 05:55:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.346 05:55:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:22.346 ************************************ 00:33:22.346 START TEST nvmf_interrupt 00:33:22.346 ************************************ 00:33:22.346 05:55:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:22.346 * Looking for test storage... 
00:33:22.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.346 --rc genhtml_branch_coverage=1 00:33:22.346 --rc genhtml_function_coverage=1 00:33:22.346 --rc genhtml_legend=1 00:33:22.346 --rc geninfo_all_blocks=1 00:33:22.346 --rc geninfo_unexecuted_blocks=1 00:33:22.346 00:33:22.346 ' 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.346 --rc genhtml_branch_coverage=1 00:33:22.346 --rc 
genhtml_function_coverage=1 00:33:22.346 --rc genhtml_legend=1 00:33:22.346 --rc geninfo_all_blocks=1 00:33:22.346 --rc geninfo_unexecuted_blocks=1 00:33:22.346 00:33:22.346 ' 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.346 --rc genhtml_branch_coverage=1 00:33:22.346 --rc genhtml_function_coverage=1 00:33:22.346 --rc genhtml_legend=1 00:33:22.346 --rc geninfo_all_blocks=1 00:33:22.346 --rc geninfo_unexecuted_blocks=1 00:33:22.346 00:33:22.346 ' 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:22.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.346 --rc genhtml_branch_coverage=1 00:33:22.346 --rc genhtml_function_coverage=1 00:33:22.346 --rc genhtml_legend=1 00:33:22.346 --rc geninfo_all_blocks=1 00:33:22.346 --rc geninfo_unexecuted_blocks=1 00:33:22.346 00:33:22.346 ' 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.346 
05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.346 
05:55:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.346 05:55:10 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:22.346 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:22.347 
05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:22.347 05:55:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:28.921 05:55:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:28.921 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:28.921 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.921 05:55:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:28.921 Found net devices under 0000:86:00.0: cvl_0_0 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:28.921 Found net devices under 0000:86:00.1: cvl_0_1 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:28.921 05:55:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:28.921 05:55:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:28.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:28.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:33:28.921 00:33:28.921 --- 10.0.0.2 ping statistics --- 00:33:28.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.921 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:28.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:28.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:33:28.921 00:33:28.921 --- 10.0.0.1 ping statistics --- 00:33:28.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.921 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.921 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:28.921 05:55:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2002602 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2002602 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2002602 ']' 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.922 [2024-11-27 05:55:16.148724] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:28.922 [2024-11-27 05:55:16.149623] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:33:28.922 [2024-11-27 05:55:16.149657] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.922 [2024-11-27 05:55:16.225266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:28.922 [2024-11-27 05:55:16.266615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.922 [2024-11-27 05:55:16.266652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.922 [2024-11-27 05:55:16.266659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.922 [2024-11-27 05:55:16.266665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.922 [2024-11-27 05:55:16.266680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.922 [2024-11-27 05:55:16.267799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.922 [2024-11-27 05:55:16.267802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.922 [2024-11-27 05:55:16.334747] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:28.922 [2024-11-27 05:55:16.335122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:28.922 [2024-11-27 05:55:16.335416] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:28.922 5000+0 records in 00:33:28.922 5000+0 records out 00:33:28.922 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0165985 s, 617 MB/s 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.922 AIO0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.922 05:55:16 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.922 [2024-11-27 05:55:16.460537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:28.922 [2024-11-27 05:55:16.496897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2002602 0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2002602 0 idle 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2002602 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2002602 -w 256 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2002602 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.24 reactor_0' 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2002602 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.24 reactor_0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:28.922 
05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2002602 1 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2002602 1 idle 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2002602 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2002602 -w 256 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2002606 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2002606 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:28.922 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2002642 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2002602 0 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2002602 0 busy 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2002602 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2002602 -w 256 00:33:28.923 05:55:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:29.182 05:55:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2002602 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.25 reactor_0' 00:33:29.182 05:55:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2002602 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.25 reactor_0 00:33:29.182 05:55:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:29.182 05:55:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:29.182 05:55:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:29.182 05:55:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:29.182 05:55:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:29.182 05:55:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:29.182 05:55:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:33:30.118 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:33:30.118 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:30.118 05:55:18 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 2002602 -w 256 00:33:30.118 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2002602 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.56 reactor_0' 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2002602 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.56 reactor_0 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2002602 1 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2002602 1 busy 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2002602 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2002602 -w 256 00:33:30.377 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2002606 root 20 0 128.2g 47616 34560 R 87.5 0.0 0:01.33 reactor_1' 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2002606 root 20 0 128.2g 47616 34560 R 87.5 0.0 0:01.33 reactor_1 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=87 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:30.636 05:55:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2002642 00:33:40.618 Initializing NVMe Controllers 00:33:40.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:40.618 
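The interrupt/common.sh checks traced above all follow one polling pattern: sample `top` in batch mode for the target pid, pull the CPU column for the reactor thread, truncate it to an integer, and compare it against the 30% busy/idle thresholds. A minimal standalone sketch of that pattern (function names mirror the trace, but this is a simplification of SPDK's test helper, not the file itself):

```shell
#!/usr/bin/env bash
# Pull the integer CPU% (9th column) out of one `top -bH` thread line,
# exactly as the sed | awk | ${var%.*} chain in the trace does.
cpu_from_top_line() {
    local rate
    rate=$(echo "$1" | sed -e 's/^\s*//g' | awk '{print $9}')
    echo "${rate%.*}"
}

# Poll up to 10 times (1s apart) until reactor_<idx> of <pid> reaches the
# requested state: >= 30% CPU for "busy", <= 30% for "idle".
reactor_is_busy_or_idle() {
    local pid=$1 idx=$2 state=$3
    local busy_threshold=30 idle_threshold=30 j line cpu_rate
    for ((j = 10; j != 0; j--)); do
        line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx") || return 1
        cpu_rate=$(cpu_from_top_line "$line")
        [[ $state == busy ]] && ((cpu_rate >= busy_threshold)) && return 0
        [[ $state == idle ]] && ((cpu_rate <= idle_threshold)) && return 0
        sleep 1
    done
    return 1
}
```

The retry loop matters in the trace itself: the first busy sample for reactor_0 reads 0.0% (perf has not ramped up yet), so the helper sleeps one second and the second sample reads 99.9%.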
Controller IO queue size 256, less than required. 00:33:40.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:40.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:40.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:40.619 Initialization complete. Launching workers. 00:33:40.619 ======================================================== 00:33:40.619 Latency(us) 00:33:40.619 Device Information : IOPS MiB/s Average min max 00:33:40.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16558.60 64.68 15468.04 2811.74 29568.50 00:33:40.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16371.60 63.95 15641.55 7535.42 56537.08 00:33:40.619 ======================================================== 00:33:40.619 Total : 32930.20 128.63 15554.30 2811.74 56537.08 00:33:40.619 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2002602 0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2002602 0 idle 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2002602 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:40.619 05:55:27 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2002602 -w 256 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2002602 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0' 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2002602 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.25 reactor_0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2002602 1 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2002602 1 idle 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2002602 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2002602 -w 256 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2002606 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2002606 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:40.619 05:55:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2002602 0 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2002602 0 idle 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2002602 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:42.000 05:55:29 
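The `waitforserial` helper traced above is a simple poll loop: list block devices with their NVMe serials and wait until the expected serial shows up after `nvme connect`. A hedged sketch of that loop (the real helper lives in autotest_common.sh; `list_devices` is an indirection added here so the `lsblk` call can be stubbed):

```shell
#!/usr/bin/env bash
# List block devices as "NAME SERIAL" lines; factored out so tests can stub it.
list_devices() { lsblk -l -o NAME,SERIAL; }

# Poll up to 16 times, 2s apart, until `want` devices carry the given serial.
waitforserial() {
    local serial=$1 want=${2:-1}
    local i=0 nvme_devices
    while ((i++ <= 15)); do
        nvme_devices=$(list_devices | grep -c -w "$serial") || true
        ((nvme_devices == want)) && return 0
        sleep 2
    done
    return 1
}
```

In the log above the first `lsblk | grep -c SPDKISFASTANDAWESOME` sample already counts one device, so the loop returns on its first iteration.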
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2002602 -w 256 00:33:42.000 05:55:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2002602 root 20 0 128.2g 73728 34560 S 6.7 0.0 0:20.49 reactor_0' 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2002602 root 20 0 128.2g 73728 34560 S 6.7 0.0 0:20.49 reactor_0 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2002602 1 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2002602 1 idle 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2002602 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2002602 -w 256 00:33:42.259 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2002606 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.08 reactor_1' 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2002606 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.08 reactor_1 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:42.519 05:55:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:42.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:42.778 rmmod nvme_tcp 00:33:42.778 rmmod nvme_fabrics 00:33:42.778 rmmod nvme_keyring 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2002602 ']' 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2002602 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2002602 ']' 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2002602 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2002602 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2002602' 00:33:42.778 killing process with pid 2002602 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2002602 00:33:42.778 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2002602 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:43.038 05:55:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.945 05:55:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:44.945 00:33:44.945 real 0m23.005s 00:33:44.945 user 0m39.813s 00:33:44.945 sys 0m8.465s 00:33:44.945 05:55:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:44.945 05:55:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:44.945 ************************************ 00:33:44.945 END TEST nvmf_interrupt 00:33:44.945 ************************************ 00:33:45.205 00:33:45.205 real 27m29.421s 00:33:45.205 user 56m41.358s 00:33:45.205 sys 9m16.738s 00:33:45.205 05:55:32 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.205 05:55:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.205 ************************************ 00:33:45.205 END TEST nvmf_tcp 00:33:45.205 ************************************ 00:33:45.205 05:55:33 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:45.205 05:55:33 -- 
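The teardown traced above ends with SPDK's `killprocess` helper: confirm the pid is still alive, log which process is being killed, then kill it and reap it so the exit is observed. A simplified sketch of that pattern (the real helper in autotest_common.sh also special-cases targets running under sudo, which is omitted here):

```shell
#!/usr/bin/env bash
# Kill a pid if it is still running, then reap it with `wait`.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone: nothing to do
    echo "killing process with pid $pid"
    kill "$pid" || true
    wait "$pid" 2>/dev/null || true             # `wait` reaps only our own children
    return 0
}
```

Note that `wait` only succeeds for children of the calling shell, which is why the trace runs it from the same autotest shell that launched the nvmf target.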
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:45.205 05:55:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:45.205 05:55:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.205 05:55:33 -- common/autotest_common.sh@10 -- # set +x 00:33:45.205 ************************************ 00:33:45.205 START TEST spdkcli_nvmf_tcp 00:33:45.205 ************************************ 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:45.205 * Looking for test storage... 00:33:45.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:45.205 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:45.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.465 --rc genhtml_branch_coverage=1 00:33:45.465 --rc genhtml_function_coverage=1 00:33:45.465 --rc genhtml_legend=1 00:33:45.465 --rc geninfo_all_blocks=1 
00:33:45.465 --rc geninfo_unexecuted_blocks=1 00:33:45.465 00:33:45.465 ' 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:45.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.465 --rc genhtml_branch_coverage=1 00:33:45.465 --rc genhtml_function_coverage=1 00:33:45.465 --rc genhtml_legend=1 00:33:45.465 --rc geninfo_all_blocks=1 00:33:45.465 --rc geninfo_unexecuted_blocks=1 00:33:45.465 00:33:45.465 ' 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:45.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.465 --rc genhtml_branch_coverage=1 00:33:45.465 --rc genhtml_function_coverage=1 00:33:45.465 --rc genhtml_legend=1 00:33:45.465 --rc geninfo_all_blocks=1 00:33:45.465 --rc geninfo_unexecuted_blocks=1 00:33:45.465 00:33:45.465 ' 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:45.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:45.465 --rc genhtml_branch_coverage=1 00:33:45.465 --rc genhtml_function_coverage=1 00:33:45.465 --rc genhtml_legend=1 00:33:45.465 --rc geninfo_all_blocks=1 00:33:45.465 --rc geninfo_unexecuted_blocks=1 00:33:45.465 00:33:45.465 ' 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:45.465 05:55:33 
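The `lt 1.15 2` trace above (scripts/common.sh `cmp_versions`) compares version strings by splitting them on `.`, `-`, or `:` and walking the numeric components left to right. A condensed sketch of that comparison (the real helper also validates each component with a `decimal` check, skipped here):

```shell
#!/usr/bin/env bash
# Compare two version strings component-wise: cmp_versions VER1 OP VER2,
# where OP is one of <  >  ==  <=  >= . Missing components count as 0.
cmp_versions() {
    local ver1 ver2 op=$2 v d1 d2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        ((d1 > d2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
        ((d1 < d2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }
```

Numeric (not lexical) comparison is the point: as a string, "1.15" sorts after "1.2", but component-wise 15 > 2, which is the behavior the lcov version gate above relies on.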
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.465 05:55:33 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:45.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2005489 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2005489 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2005489 ']' 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.466 05:55:33 
spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:45.466 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.466 [2024-11-27 05:55:33.302137] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:33:45.466 [2024-11-27 05:55:33.302188] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005489 ] 00:33:45.466 [2024-11-27 05:55:33.375055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:45.466 [2024-11-27 05:55:33.419602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.466 [2024-11-27 05:55:33.419604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- 
# [[ tcp == \r\d\m\a ]] 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:45.726 05:55:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:45.726 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:45.726 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:45.726 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:45.726 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:45.726 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:45.726 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:45.726 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:45.726 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:45.726 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:45.726 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:45.726 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:45.726 ' 00:33:48.262 [2024-11-27 05:55:36.224544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:49.638 [2024-11-27 05:55:37.565026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:33:52.172 [2024-11-27 05:55:40.048630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:54.707 [2024-11-27 05:55:42.191367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:56.085 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:56.085 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:56.085 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:56.085 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:56.085 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:56.085 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:56.085 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:56.085 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:56.085 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:56.085 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:56.085 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:56.085 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:56.085 05:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:33:56.085 05:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:56.085 05:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.085 05:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:56.085 05:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:56.085 05:55:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.085 05:55:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:56.085 05:55:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:56.654 05:55:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:56.654 05:55:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:56.654 05:55:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:56.654 05:55:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:56.654 05:55:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.654 05:55:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:56.654 05:55:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:56.654 05:55:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.654 05:55:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:56.654 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:33:56.654 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:56.654 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:56.654 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:56.654 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:56.654 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:56.654 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:56.654 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:56.654 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:56.654 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:56.654 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:56.654 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:56.654 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:56.654 ' 00:34:03.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:03.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:03.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:03.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:03.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:03.225 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:03.225 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:03.225 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:03.225 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:03.225 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:03.225 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:03.225 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:03.225 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:03.225 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:03.225 05:55:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:03.225 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2005489 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2005489 ']' 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2005489 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2005489 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2005489' 00:34:03.226 killing process with pid 2005489 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2005489 00:34:03.226 05:55:50 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2005489 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2005489 ']' 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2005489 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2005489 ']' 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2005489 00:34:03.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2005489) - No such process 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2005489 is not found' 00:34:03.226 Process with pid 2005489 is not found 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:03.226 00:34:03.226 real 0m17.281s 00:34:03.226 user 0m38.085s 00:34:03.226 sys 0m0.780s 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:03.226 05:55:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:03.226 ************************************ 00:34:03.226 END TEST spdkcli_nvmf_tcp 00:34:03.226 ************************************ 00:34:03.226 05:55:50 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:03.226 05:55:50 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:34:03.226 05:55:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:03.226 05:55:50 -- common/autotest_common.sh@10 -- # set +x 00:34:03.226 ************************************ 00:34:03.226 START TEST nvmf_identify_passthru 00:34:03.226 ************************************ 00:34:03.226 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:03.226 * Looking for test storage... 00:34:03.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:03.226 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:03.226 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:34:03.226 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:03.226 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.226 05:55:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:03.226 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.226 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:03.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.226 --rc genhtml_branch_coverage=1 00:34:03.226 --rc genhtml_function_coverage=1 00:34:03.226 --rc genhtml_legend=1 
00:34:03.226 --rc geninfo_all_blocks=1 00:34:03.226 --rc geninfo_unexecuted_blocks=1 00:34:03.226 00:34:03.226 ' 00:34:03.226 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:03.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.226 --rc genhtml_branch_coverage=1 00:34:03.226 --rc genhtml_function_coverage=1 00:34:03.226 --rc genhtml_legend=1 00:34:03.226 --rc geninfo_all_blocks=1 00:34:03.226 --rc geninfo_unexecuted_blocks=1 00:34:03.226 00:34:03.226 ' 00:34:03.226 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:03.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.226 --rc genhtml_branch_coverage=1 00:34:03.226 --rc genhtml_function_coverage=1 00:34:03.226 --rc genhtml_legend=1 00:34:03.226 --rc geninfo_all_blocks=1 00:34:03.226 --rc geninfo_unexecuted_blocks=1 00:34:03.226 00:34:03.226 ' 00:34:03.226 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:03.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.226 --rc genhtml_branch_coverage=1 00:34:03.226 --rc genhtml_function_coverage=1 00:34:03.226 --rc genhtml_legend=1 00:34:03.226 --rc geninfo_all_blocks=1 00:34:03.226 --rc geninfo_unexecuted_blocks=1 00:34:03.226 00:34:03.226 ' 00:34:03.226 05:55:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.226 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:03.226 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.226 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.226 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.226 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.226 05:55:50 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:03.226 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:03.226 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.226 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.227 05:55:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.227 05:55:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.227 05:55:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.227 05:55:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.227 05:55:50 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.227 05:55:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.227 05:55:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.227 05:55:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:03.227 05:55:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:03.227 05:55:50 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:03.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:03.227 05:55:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.227 05:55:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.227 05:55:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.227 05:55:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.227 05:55:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.227 05:55:50 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.227 05:55:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.227 05:55:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.227 05:55:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:03.227 05:55:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.227 05:55:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.227 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:03.227 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:03.227 05:55:50 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:03.227 05:55:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:08.509 
05:55:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:08.509 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:08.509 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:08.510 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:08.510 Found net devices under 0000:86:00.0: cvl_0_0 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.510 05:55:56 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:08.510 Found net devices under 0000:86:00.1: cvl_0_1 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:08.510 
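The device-bucketing the trace just exercised (e810/x722/mlx arrays populated from vendor:device pairs, then matched against the two discovered NICs) can be sketched in Python. The ID tables below are transcribed from the `pci_bus_cache` keys in the trace; `classify` is a hypothetical helper for illustration, not part of SPDK's scripts:

```python
# Sketch of the vendor:device bucketing nvmf/common.sh performs above.
# ID tables transcribed from the trace; classify() is illustrative only.
INTEL = 0x8086
MELLANOX = 0x15B3

E810 = {0x1592, 0x159B}
X722 = {0x37D2}
MLX = {0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x101B,
       0x1017, 0x1019, 0x1015, 0x1013}

def classify(vendor: int, device: int) -> str:
    """Return the NIC family bucket for a PCI (vendor, device) pair."""
    if vendor == INTEL and device in E810:
        return "e810"
    if vendor == INTEL and device in X722:
        return "x722"
    if vendor == MELLANOX and device in MLX:
        return "mlx"
    return "unknown"

# The two NICs found above, 0000:86:00.0/1 (0x8086 - 0x159b), land in the e810 bucket:
print(classify(0x8086, 0x159B))  # e810
```

This matches the trace's branch `[[ e810 == e810 ]]` taking the e810 path before the per-device net-device discovery loop.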
05:55:56 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:08.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:08.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:34:08.510 00:34:08.510 --- 10.0.0.2 ping statistics --- 00:34:08.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.510 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:08.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:08.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:34:08.510 00:34:08.510 --- 10.0.0.1 ping statistics --- 00:34:08.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.510 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:08.510 05:55:56 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:08.510 05:55:56 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:08.510 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:08.510 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:08.770 05:55:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:08.770 
05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:34:08.770 05:55:56 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:34:08.770 05:55:56 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:08.770 05:55:56 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:08.770 05:55:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:08.770 05:55:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:08.770 05:55:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:14.051 05:56:01 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:34:14.051 05:56:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:14.051 05:56:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:14.051 05:56:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:18.241 05:56:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:18.241 05:56:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:18.241 05:56:05 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:18.241 05:56:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.241 05:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:18.241 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:18.241 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.241 05:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2012816 00:34:18.241 05:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:18.241 05:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:18.241 05:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2012816 00:34:18.241 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2012816 ']' 00:34:18.241 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:34:18.241 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:18.241 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.241 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:18.241 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.241 [2024-11-27 05:56:06.065640] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:34:18.241 [2024-11-27 05:56:06.065697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.241 [2024-11-27 05:56:06.129039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:18.241 [2024-11-27 05:56:06.172891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:18.241 [2024-11-27 05:56:06.172928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.241 [2024-11-27 05:56:06.172937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.241 [2024-11-27 05:56:06.172943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.241 [2024-11-27 05:56:06.172948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:18.241 [2024-11-27 05:56:06.174470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.241 [2024-11-27 05:56:06.174510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:18.241 [2024-11-27 05:56:06.174619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.241 [2024-11-27 05:56:06.174619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:18.498 05:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.498 INFO: Log level set to 20 00:34:18.498 INFO: Requests: 00:34:18.498 { 00:34:18.498 "jsonrpc": "2.0", 00:34:18.498 "method": "nvmf_set_config", 00:34:18.498 "id": 1, 00:34:18.498 "params": { 00:34:18.498 "admin_cmd_passthru": { 00:34:18.498 "identify_ctrlr": true 00:34:18.498 } 00:34:18.498 } 00:34:18.498 } 00:34:18.498 00:34:18.498 INFO: response: 00:34:18.498 { 00:34:18.498 "jsonrpc": "2.0", 00:34:18.498 "id": 1, 00:34:18.498 "result": true 00:34:18.498 } 00:34:18.498 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.498 05:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.498 INFO: Setting log level to 20 00:34:18.498 INFO: Setting log level to 20 00:34:18.498 INFO: Log level set to 20 00:34:18.498 INFO: Log level set to 20 00:34:18.498 
INFO: Requests: 00:34:18.498 { 00:34:18.498 "jsonrpc": "2.0", 00:34:18.498 "method": "framework_start_init", 00:34:18.498 "id": 1 00:34:18.498 } 00:34:18.498 00:34:18.498 INFO: Requests: 00:34:18.498 { 00:34:18.498 "jsonrpc": "2.0", 00:34:18.498 "method": "framework_start_init", 00:34:18.498 "id": 1 00:34:18.498 } 00:34:18.498 00:34:18.498 [2024-11-27 05:56:06.341907] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:18.498 INFO: response: 00:34:18.498 { 00:34:18.498 "jsonrpc": "2.0", 00:34:18.498 "id": 1, 00:34:18.498 "result": true 00:34:18.498 } 00:34:18.498 00:34:18.498 INFO: response: 00:34:18.498 { 00:34:18.498 "jsonrpc": "2.0", 00:34:18.498 "id": 1, 00:34:18.498 "result": true 00:34:18.498 } 00:34:18.498 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.498 05:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.498 INFO: Setting log level to 40 00:34:18.498 INFO: Setting log level to 40 00:34:18.498 INFO: Setting log level to 40 00:34:18.498 [2024-11-27 05:56:06.355210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.498 05:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.498 05:56:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:18.498 05:56:06 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.498 05:56:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.780 Nvme0n1 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.780 [2024-11-27 05:56:09.261203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.780 05:56:09 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.780 [ 00:34:21.780 { 00:34:21.780 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:21.780 "subtype": "Discovery", 00:34:21.780 "listen_addresses": [], 00:34:21.780 "allow_any_host": true, 00:34:21.780 "hosts": [] 00:34:21.780 }, 00:34:21.780 { 00:34:21.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:21.780 "subtype": "NVMe", 00:34:21.780 "listen_addresses": [ 00:34:21.780 { 00:34:21.780 "trtype": "TCP", 00:34:21.780 "adrfam": "IPv4", 00:34:21.780 "traddr": "10.0.0.2", 00:34:21.780 "trsvcid": "4420" 00:34:21.780 } 00:34:21.780 ], 00:34:21.780 "allow_any_host": true, 00:34:21.780 "hosts": [], 00:34:21.780 "serial_number": "SPDK00000000000001", 00:34:21.780 "model_number": "SPDK bdev Controller", 00:34:21.780 "max_namespaces": 1, 00:34:21.780 "min_cntlid": 1, 00:34:21.780 "max_cntlid": 65519, 00:34:21.780 "namespaces": [ 00:34:21.780 { 00:34:21.780 "nsid": 1, 00:34:21.780 "bdev_name": "Nvme0n1", 00:34:21.780 "name": "Nvme0n1", 00:34:21.780 "nguid": "6B6A5B8070174ED2AA33832BB8AA8F4B", 00:34:21.780 "uuid": "6b6a5b80-7017-4ed2-aa33-832bb8aa8f4b" 00:34:21.780 } 00:34:21.780 ] 00:34:21.780 } 00:34:21.780 ] 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.780 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:21.780 05:56:09 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:21.780 05:56:09 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:21.780 05:56:09 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:21.780 05:56:09 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:21.780 05:56:09 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:21.781 05:56:09 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:21.781 05:56:09 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:21.781 rmmod nvme_tcp 00:34:21.781 rmmod nvme_fabrics 00:34:21.781 rmmod nvme_keyring 00:34:21.781 05:56:09 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:21.781 05:56:09 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:21.781 05:56:09 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:21.781 05:56:09 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2012816 ']' 00:34:21.781 05:56:09 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2012816 00:34:21.781 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2012816 ']' 00:34:21.781 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2012816 00:34:21.781 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:21.781 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:21.781 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2012816 00:34:21.781 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:21.781 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:21.781 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2012816' 00:34:21.781 killing process with pid 2012816 00:34:21.781 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2012816 00:34:21.781 05:56:09 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2012816 00:34:24.317 05:56:11 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:24.317 05:56:11 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:24.317 05:56:11 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:24.317 05:56:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:24.317 05:56:11 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:24.317 05:56:11 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:24.317 05:56:11 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:24.317 05:56:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:24.317 05:56:11 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:24.317 05:56:11 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.317 05:56:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:24.317 05:56:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.223 05:56:13 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:26.223 00:34:26.223 real 0m23.498s 00:34:26.223 user 0m30.185s 00:34:26.223 sys 0m6.305s 00:34:26.223 05:56:13 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.223 05:56:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:26.223 ************************************ 00:34:26.223 END TEST nvmf_identify_passthru 00:34:26.223 ************************************ 00:34:26.223 05:56:13 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:26.223 05:56:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:26.223 05:56:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.223 05:56:13 -- common/autotest_common.sh@10 -- # set +x 00:34:26.223 ************************************ 00:34:26.223 START TEST nvmf_dif 00:34:26.223 ************************************ 00:34:26.223 05:56:13 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:26.223 * Looking for test storage... 
00:34:26.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:26.223 05:56:14 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:26.223 05:56:14 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:34:26.223 05:56:14 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:26.223 05:56:14 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:26.223 05:56:14 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:26.223 05:56:14 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:26.223 05:56:14 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:26.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.223 --rc genhtml_branch_coverage=1 00:34:26.223 --rc genhtml_function_coverage=1 00:34:26.223 --rc genhtml_legend=1 00:34:26.223 --rc geninfo_all_blocks=1 00:34:26.223 --rc geninfo_unexecuted_blocks=1 00:34:26.223 00:34:26.223 ' 00:34:26.223 05:56:14 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:26.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.223 --rc genhtml_branch_coverage=1 00:34:26.223 --rc genhtml_function_coverage=1 00:34:26.223 --rc genhtml_legend=1 00:34:26.223 --rc geninfo_all_blocks=1 00:34:26.223 --rc geninfo_unexecuted_blocks=1 00:34:26.223 00:34:26.223 ' 00:34:26.223 05:56:14 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:34:26.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.223 --rc genhtml_branch_coverage=1 00:34:26.223 --rc genhtml_function_coverage=1 00:34:26.223 --rc genhtml_legend=1 00:34:26.223 --rc geninfo_all_blocks=1 00:34:26.223 --rc geninfo_unexecuted_blocks=1 00:34:26.223 00:34:26.223 ' 00:34:26.223 05:56:14 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:26.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.223 --rc genhtml_branch_coverage=1 00:34:26.223 --rc genhtml_function_coverage=1 00:34:26.223 --rc genhtml_legend=1 00:34:26.223 --rc geninfo_all_blocks=1 00:34:26.223 --rc geninfo_unexecuted_blocks=1 00:34:26.223 00:34:26.223 ' 00:34:26.223 05:56:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.223 05:56:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:26.223 05:56:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.223 05:56:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.223 05:56:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:26.224 05:56:14 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:26.224 05:56:14 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:26.224 05:56:14 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.224 05:56:14 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.224 05:56:14 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.224 05:56:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.224 05:56:14 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.224 05:56:14 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.224 05:56:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:26.224 05:56:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:26.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:26.224 05:56:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:26.224 05:56:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:34:26.224 05:56:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:26.224 05:56:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:26.224 05:56:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.224 05:56:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:26.224 05:56:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:26.224 05:56:14 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:34:26.224 05:56:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:32.796 05:56:19 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:34:32.796 05:56:19 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:32.797 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:32.797 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.797 05:56:19 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:32.797 Found net devices under 0000:86:00.0: cvl_0_0 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:32.797 Found net devices under 0000:86:00.1: cvl_0_1 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.797 
05:56:19 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:32.797 05:56:19 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.797 05:56:20 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.797 05:56:20 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.797 05:56:20 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:32.797 05:56:20 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:32.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:32.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:34:32.797 00:34:32.797 --- 10.0.0.2 ping statistics --- 00:34:32.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.797 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:34:32.797 05:56:20 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:32.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:34:32.797 00:34:32.797 --- 10.0.0.1 ping statistics --- 00:34:32.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.797 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:34:32.797 05:56:20 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.797 05:56:20 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:32.797 05:56:20 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:32.797 05:56:20 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:34.702 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:34.702 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:34.702 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:34.702 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:34.702 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:34.702 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:34.961 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:34.961 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:34.961 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:34.961 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:34.961 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:34.961 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:34.961 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:34:34.961 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:34.961 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:34.961 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:34.961 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:34.961 05:56:22 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.961 05:56:22 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:34.961 05:56:22 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:34.961 05:56:22 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.961 05:56:22 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:34.961 05:56:22 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:34.961 05:56:22 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:34.961 05:56:22 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:34.961 05:56:22 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:34.961 05:56:22 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:34.961 05:56:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:34.961 05:56:22 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2018506 00:34:34.961 05:56:22 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2018506 00:34:34.962 05:56:22 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:34.962 05:56:22 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2018506 ']' 00:34:34.962 05:56:22 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.962 05:56:22 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:34.962 05:56:22 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:34.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.962 05:56:22 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:34.962 05:56:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:35.220 [2024-11-27 05:56:22.974993] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:34:35.220 [2024-11-27 05:56:22.975039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.220 [2024-11-27 05:56:23.054714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.220 [2024-11-27 05:56:23.095337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.220 [2024-11-27 05:56:23.095373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.220 [2024-11-27 05:56:23.095380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.220 [2024-11-27 05:56:23.095389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.220 [2024-11-27 05:56:23.095394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:35.220 [2024-11-27 05:56:23.095964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.158 05:56:23 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:36.158 05:56:23 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:36.158 05:56:23 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:36.158 05:56:23 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:36.158 05:56:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.158 05:56:23 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:36.158 05:56:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:36.158 05:56:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:36.158 05:56:23 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.158 05:56:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.158 [2024-11-27 05:56:23.842336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.158 05:56:23 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.158 05:56:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:36.158 05:56:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:36.158 05:56:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.158 05:56:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.158 ************************************ 00:34:36.158 START TEST fio_dif_1_default 00:34:36.158 ************************************ 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.159 bdev_null0 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:36.159 [2024-11-27 05:56:23.910640] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:36.159 { 00:34:36.159 "params": { 00:34:36.159 "name": "Nvme$subsystem", 00:34:36.159 "trtype": "$TEST_TRANSPORT", 00:34:36.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.159 "adrfam": "ipv4", 00:34:36.159 "trsvcid": "$NVMF_PORT", 00:34:36.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.159 "hdgst": ${hdgst:-false}, 00:34:36.159 "ddgst": ${ddgst:-false} 00:34:36.159 }, 00:34:36.159 "method": "bdev_nvme_attach_controller" 00:34:36.159 } 00:34:36.159 EOF 00:34:36.159 )") 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:36.159 "params": { 00:34:36.159 "name": "Nvme0", 00:34:36.159 "trtype": "tcp", 00:34:36.159 "traddr": "10.0.0.2", 00:34:36.159 "adrfam": "ipv4", 00:34:36.159 "trsvcid": "4420", 00:34:36.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:36.159 "hdgst": false, 00:34:36.159 "ddgst": false 00:34:36.159 }, 00:34:36.159 "method": "bdev_nvme_attach_controller" 00:34:36.159 }' 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:36.159 05:56:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.418 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:36.418 fio-3.35 
00:34:36.418 Starting 1 thread 00:34:48.623 00:34:48.623 filename0: (groupid=0, jobs=1): err= 0: pid=2018885: Wed Nov 27 05:56:34 2024 00:34:48.623 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10040msec) 00:34:48.623 slat (nsec): min=6012, max=33999, avg=6424.98, stdev=1270.55 00:34:48.623 clat (usec): min=441, max=45929, avg=41129.19, stdev=2665.40 00:34:48.623 lat (usec): min=448, max=45963, avg=41135.62, stdev=2665.38 00:34:48.623 clat percentiles (usec): 00:34:48.623 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:48.623 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:48.623 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:48.623 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:34:48.623 | 99.99th=[45876] 00:34:48.623 bw ( KiB/s): min= 384, max= 416, per=99.78%, avg=388.80, stdev=11.72, samples=20 00:34:48.623 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:48.623 lat (usec) : 500=0.41% 00:34:48.624 lat (msec) : 50=99.59% 00:34:48.624 cpu : usr=92.63%, sys=7.10%, ctx=8, majf=0, minf=0 00:34:48.624 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.624 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.624 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:48.624 00:34:48.624 Run status group 0 (all jobs): 00:34:48.624 READ: bw=389KiB/s (398kB/s), 389KiB/s-389KiB/s (398kB/s-398kB/s), io=3904KiB (3998kB), run=10040-10040msec 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.624 05:56:35 
nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.624 00:34:48.624 real 0m11.243s 00:34:48.624 user 0m16.653s 00:34:48.624 sys 0m0.987s 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 ************************************ 00:34:48.624 END TEST fio_dif_1_default 00:34:48.624 ************************************ 00:34:48.624 05:56:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:48.624 05:56:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:48.624 05:56:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 ************************************ 00:34:48.624 START TEST fio_dif_1_multi_subsystems 00:34:48.624 ************************************ 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 bdev_null0 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 [2024-11-27 05:56:35.234699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 bdev_null1 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 05:56:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:48.624 { 00:34:48.624 "params": { 00:34:48.624 "name": "Nvme$subsystem", 00:34:48.624 "trtype": "$TEST_TRANSPORT", 00:34:48.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:48.624 "adrfam": "ipv4", 00:34:48.624 "trsvcid": "$NVMF_PORT", 00:34:48.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:48.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:48.624 "hdgst": ${hdgst:-false}, 00:34:48.624 "ddgst": ${ddgst:-false} 00:34:48.624 }, 00:34:48.624 "method": "bdev_nvme_attach_controller" 00:34:48.624 } 00:34:48.624 EOF 00:34:48.624 )") 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.624 05:56:35 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:48.624 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:48.625 { 00:34:48.625 "params": { 00:34:48.625 "name": "Nvme$subsystem", 00:34:48.625 "trtype": "$TEST_TRANSPORT", 00:34:48.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:48.625 "adrfam": "ipv4", 00:34:48.625 "trsvcid": "$NVMF_PORT", 00:34:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:48.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:48.625 "hdgst": ${hdgst:-false}, 00:34:48.625 "ddgst": ${ddgst:-false} 00:34:48.625 }, 00:34:48.625 "method": "bdev_nvme_attach_controller" 00:34:48.625 } 00:34:48.625 EOF 00:34:48.625 )") 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:48.625 "params": { 00:34:48.625 "name": "Nvme0", 00:34:48.625 "trtype": "tcp", 00:34:48.625 "traddr": "10.0.0.2", 00:34:48.625 "adrfam": "ipv4", 00:34:48.625 "trsvcid": "4420", 00:34:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:48.625 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:48.625 "hdgst": false, 00:34:48.625 "ddgst": false 00:34:48.625 }, 00:34:48.625 "method": "bdev_nvme_attach_controller" 00:34:48.625 },{ 00:34:48.625 "params": { 00:34:48.625 "name": "Nvme1", 00:34:48.625 "trtype": "tcp", 00:34:48.625 "traddr": "10.0.0.2", 00:34:48.625 "adrfam": "ipv4", 00:34:48.625 "trsvcid": "4420", 00:34:48.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:48.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:48.625 "hdgst": false, 00:34:48.625 "ddgst": false 00:34:48.625 }, 00:34:48.625 "method": "bdev_nvme_attach_controller" 00:34:48.625 }' 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:48.625 05:56:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.625 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:48.625 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:48.625 fio-3.35 00:34:48.625 Starting 2 threads 00:34:58.605 00:34:58.605 filename0: (groupid=0, jobs=1): err= 0: pid=2020853: Wed Nov 27 05:56:46 2024 00:34:58.605 read: IOPS=213, BW=853KiB/s (874kB/s)(8544KiB/10015msec) 00:34:58.605 slat (nsec): min=5950, max=42836, avg=7184.52, stdev=2288.90 00:34:58.605 clat (usec): min=360, max=42421, avg=18732.46, stdev=20179.97 00:34:58.605 lat (usec): min=367, max=42429, avg=18739.64, stdev=20179.44 00:34:58.605 clat percentiles (usec): 00:34:58.605 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 400], 20.00th=[ 416], 00:34:58.605 | 30.00th=[ 429], 40.00th=[ 445], 50.00th=[ 545], 60.00th=[40633], 00:34:58.605 | 70.00th=[40633], 80.00th=[40633], 90.00th=[41681], 95.00th=[41681], 00:34:58.605 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:58.605 | 99.99th=[42206] 00:34:58.605 bw ( KiB/s): min= 736, max= 1280, per=52.54%, avg=852.80, stdev=137.84, samples=20 00:34:58.605 iops : min= 184, max= 320, avg=213.20, stdev=34.46, samples=20 00:34:58.605 lat (usec) : 500=49.77%, 750=4.96%, 1000=0.14% 00:34:58.605 lat (msec) : 50=45.13% 00:34:58.605 cpu : usr=96.72%, sys=2.90%, ctx=36, majf=0, minf=0 00:34:58.605 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:58.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.605 issued rwts: total=2136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.605 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:58.605 filename1: (groupid=0, jobs=1): err= 0: pid=2020854: Wed Nov 27 05:56:46 2024 00:34:58.605 read: IOPS=192, BW=768KiB/s (787kB/s)(7696KiB/10015msec) 00:34:58.605 slat (nsec): min=6064, max=41503, avg=7257.11, stdev=2404.73 00:34:58.605 clat (usec): min=374, max=42520, avg=20798.77, stdev=20332.29 00:34:58.605 lat (usec): min=380, max=42528, avg=20806.03, stdev=20331.63 00:34:58.605 clat percentiles (usec): 00:34:58.605 | 1.00th=[ 379], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 408], 00:34:58.605 | 30.00th=[ 420], 40.00th=[ 437], 50.00th=[40633], 60.00th=[40633], 00:34:58.605 | 70.00th=[40633], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:58.605 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:58.605 | 99.99th=[42730] 00:34:58.605 bw ( KiB/s): min= 768, max= 768, per=47.36%, avg=768.00, stdev= 0.00, samples=20 00:34:58.605 iops : min= 192, max= 192, avg=192.00, stdev= 0.00, samples=20 00:34:58.605 lat (usec) : 500=44.44%, 750=5.46% 00:34:58.605 lat (msec) : 50=50.10% 00:34:58.605 cpu : usr=96.78%, sys=2.93%, ctx=18, majf=0, minf=0 00:34:58.605 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.605 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.605 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:58.605 00:34:58.605 Run status group 0 (all jobs): 00:34:58.605 READ: bw=1622KiB/s (1660kB/s), 768KiB/s-853KiB/s (787kB/s-874kB/s), io=15.9MiB (16.6MB), run=10015-10015msec 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 
-- # destroy_subsystems 0 1 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.865 05:56:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.865 00:34:58.865 real 0m11.501s 00:34:58.865 user 0m26.246s 00:34:58.865 sys 0m0.891s 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.865 05:56:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.865 ************************************ 00:34:58.865 END TEST fio_dif_1_multi_subsystems 00:34:58.865 ************************************ 00:34:58.865 05:56:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:58.865 05:56:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:58.865 05:56:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:58.865 05:56:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.865 ************************************ 00:34:58.865 START TEST fio_dif_rand_params 00:34:58.865 ************************************ 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:58.865 05:56:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.865 bdev_null0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.865 [2024-11-27 05:56:46.810127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.865 { 00:34:58.865 "params": { 00:34:58.865 "name": "Nvme$subsystem", 00:34:58.865 "trtype": "$TEST_TRANSPORT", 00:34:58.865 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:58.865 "adrfam": "ipv4", 00:34:58.865 "trsvcid": "$NVMF_PORT", 00:34:58.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.865 "hdgst": ${hdgst:-false}, 00:34:58.865 "ddgst": ${ddgst:-false} 00:34:58.865 }, 00:34:58.865 "method": "bdev_nvme_attach_controller" 00:34:58.865 } 00:34:58.865 EOF 00:34:58.865 )") 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:58.865 05:56:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:58.865 05:56:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:58.865 "params": { 00:34:58.865 "name": "Nvme0", 00:34:58.865 "trtype": "tcp", 00:34:58.865 "traddr": "10.0.0.2", 00:34:58.865 "adrfam": "ipv4", 00:34:58.865 "trsvcid": "4420", 00:34:58.865 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.865 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.865 "hdgst": false, 00:34:58.865 "ddgst": false 00:34:58.866 }, 00:34:58.866 "method": "bdev_nvme_attach_controller" 00:34:58.866 }' 00:34:58.866 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.866 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.866 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.866 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:58.866 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.866 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:59.216 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:59.216 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:59.216 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:59.216 05:56:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.216 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:59.216 ... 00:34:59.216 fio-3.35 00:34:59.216 Starting 3 threads 00:35:05.945 00:35:05.945 filename0: (groupid=0, jobs=1): err= 0: pid=2022820: Wed Nov 27 05:56:52 2024 00:35:05.945 read: IOPS=328, BW=41.0MiB/s (43.0MB/s)(206MiB/5025msec) 00:35:05.945 slat (nsec): min=6177, max=37304, avg=10547.42, stdev=2261.69 00:35:05.945 clat (usec): min=3325, max=51015, avg=9122.05, stdev=4847.10 00:35:05.945 lat (usec): min=3332, max=51027, avg=9132.60, stdev=4847.08 00:35:05.945 clat percentiles (usec): 00:35:05.945 | 1.00th=[ 3851], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7701], 00:35:05.945 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8979], 00:35:05.945 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[10814], 00:35:05.945 | 99.00th=[46924], 99.50th=[47449], 99.90th=[51119], 99.95th=[51119], 00:35:05.945 | 99.99th=[51119] 00:35:05.945 bw ( KiB/s): min=38144, max=45568, per=36.24%, avg=42163.20, stdev=2571.49, samples=10 00:35:05.945 iops : min= 298, max= 356, avg=329.40, stdev=20.09, samples=10 00:35:05.945 lat (msec) : 4=1.58%, 10=84.73%, 20=12.24%, 50=1.33%, 100=0.12% 00:35:05.945 cpu : usr=93.73%, sys=5.95%, ctx=14, majf=0, minf=56 00:35:05.945 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:05.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.945 issued rwts: total=1650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.945 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:05.945 filename0: (groupid=0, jobs=1): err= 0: pid=2022821: Wed Nov 27 05:56:52 2024 00:35:05.945 read: IOPS=299, BW=37.4MiB/s (39.2MB/s)(188MiB/5031msec) 00:35:05.945 slat (nsec): min=6195, max=61843, avg=10967.15, stdev=2309.90 
00:35:05.945 clat (usec): min=3757, max=50280, avg=10013.37, stdev=5024.78 00:35:05.945 lat (usec): min=3763, max=50292, avg=10024.34, stdev=5024.82 00:35:05.945 clat percentiles (usec): 00:35:05.945 | 1.00th=[ 4883], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 8291], 00:35:05.945 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:35:05.945 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11600], 95.00th=[12125], 00:35:05.945 | 99.00th=[46400], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:35:05.945 | 99.99th=[50070] 00:35:05.945 bw ( KiB/s): min=27392, max=41984, per=33.05%, avg=38451.20, stdev=4410.67, samples=10 00:35:05.945 iops : min= 214, max= 328, avg=300.40, stdev=34.46, samples=10 00:35:05.945 lat (msec) : 4=0.86%, 10=59.67%, 20=37.87%, 50=1.53%, 100=0.07% 00:35:05.945 cpu : usr=94.89%, sys=4.79%, ctx=11, majf=0, minf=90 00:35:05.945 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:05.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.945 issued rwts: total=1505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.945 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:05.945 filename0: (groupid=0, jobs=1): err= 0: pid=2022822: Wed Nov 27 05:56:52 2024 00:35:05.945 read: IOPS=283, BW=35.4MiB/s (37.2MB/s)(179MiB/5044msec) 00:35:05.945 slat (nsec): min=6158, max=31168, avg=10833.93, stdev=2091.56 00:35:05.945 clat (usec): min=3356, max=52433, avg=10539.67, stdev=5935.30 00:35:05.945 lat (usec): min=3363, max=52444, avg=10550.51, stdev=5935.09 00:35:05.945 clat percentiles (usec): 00:35:05.945 | 1.00th=[ 3851], 5.00th=[ 6521], 10.00th=[ 7701], 20.00th=[ 8586], 00:35:05.945 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10159], 00:35:05.945 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11731], 95.00th=[12256], 00:35:05.945 | 99.00th=[48497], 99.50th=[50070], 
99.90th=[51643], 99.95th=[52691], 00:35:05.945 | 99.99th=[52691] 00:35:05.945 bw ( KiB/s): min=28416, max=39680, per=31.42%, avg=36556.80, stdev=3408.64, samples=10 00:35:05.945 iops : min= 222, max= 310, avg=285.60, stdev=26.63, samples=10 00:35:05.945 lat (msec) : 4=1.68%, 10=50.98%, 20=45.10%, 50=1.82%, 100=0.42% 00:35:05.945 cpu : usr=94.69%, sys=5.02%, ctx=5, majf=0, minf=23 00:35:05.945 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:05.945 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.945 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.945 issued rwts: total=1430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.945 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:05.945 00:35:05.945 Run status group 0 (all jobs): 00:35:05.945 READ: bw=114MiB/s (119MB/s), 35.4MiB/s-41.0MiB/s (37.2MB/s-43.0MB/s), io=573MiB (601MB), run=5025-5044msec 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:05.945 05:56:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.945 bdev_null0 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.945 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.946 [2024-11-27 05:56:53.242372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.946 bdev_null1 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:05.946 bdev_null2 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:05.946 { 00:35:05.946 "params": { 00:35:05.946 "name": "Nvme$subsystem", 00:35:05.946 "trtype": "$TEST_TRANSPORT", 00:35:05.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.946 "adrfam": "ipv4", 00:35:05.946 "trsvcid": "$NVMF_PORT", 00:35:05.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.946 "hdgst": ${hdgst:-false}, 00:35:05.946 "ddgst": ${ddgst:-false} 00:35:05.946 }, 00:35:05.946 "method": "bdev_nvme_attach_controller" 00:35:05.946 } 00:35:05.946 EOF 00:35:05.946 )") 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.946 05:56:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:05.946 { 00:35:05.946 "params": { 00:35:05.946 "name": "Nvme$subsystem", 00:35:05.946 "trtype": "$TEST_TRANSPORT", 00:35:05.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.946 "adrfam": "ipv4", 00:35:05.946 "trsvcid": "$NVMF_PORT", 00:35:05.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.946 "hdgst": ${hdgst:-false}, 00:35:05.946 "ddgst": ${ddgst:-false} 00:35:05.946 }, 00:35:05.946 "method": "bdev_nvme_attach_controller" 00:35:05.946 } 00:35:05.946 EOF 00:35:05.946 )") 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:05.946 05:56:53 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:05.946 { 00:35:05.946 "params": { 00:35:05.946 "name": "Nvme$subsystem", 00:35:05.946 "trtype": "$TEST_TRANSPORT", 00:35:05.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.946 "adrfam": "ipv4", 00:35:05.946 "trsvcid": "$NVMF_PORT", 00:35:05.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.946 "hdgst": ${hdgst:-false}, 00:35:05.946 "ddgst": ${ddgst:-false} 00:35:05.946 }, 00:35:05.946 "method": "bdev_nvme_attach_controller" 00:35:05.946 } 00:35:05.946 EOF 00:35:05.946 )") 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:05.946 05:56:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:05.946 "params": { 00:35:05.946 "name": "Nvme0", 00:35:05.946 "trtype": "tcp", 00:35:05.946 "traddr": "10.0.0.2", 00:35:05.946 "adrfam": "ipv4", 00:35:05.946 "trsvcid": "4420", 00:35:05.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:05.946 "hdgst": false, 00:35:05.946 "ddgst": false 00:35:05.946 }, 00:35:05.946 "method": "bdev_nvme_attach_controller" 00:35:05.946 },{ 00:35:05.946 "params": { 00:35:05.946 "name": "Nvme1", 00:35:05.946 "trtype": "tcp", 00:35:05.946 "traddr": "10.0.0.2", 00:35:05.946 "adrfam": "ipv4", 00:35:05.946 "trsvcid": "4420", 00:35:05.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:05.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:05.946 "hdgst": false, 00:35:05.946 "ddgst": false 00:35:05.946 }, 00:35:05.946 "method": "bdev_nvme_attach_controller" 00:35:05.946 },{ 00:35:05.946 "params": { 00:35:05.946 "name": "Nvme2", 00:35:05.946 "trtype": "tcp", 00:35:05.946 "traddr": "10.0.0.2", 00:35:05.946 "adrfam": "ipv4", 00:35:05.946 "trsvcid": "4420", 00:35:05.946 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:05.947 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:05.947 "hdgst": false, 00:35:05.947 "ddgst": false 00:35:05.947 }, 00:35:05.947 "method": "bdev_nvme_attach_controller" 00:35:05.947 }' 00:35:05.947 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:05.947 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:05.947 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.947 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.947 05:56:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:05.947 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:05.947 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:05.947 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:05.947 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:05.947 05:56:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.947 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:05.947 ... 00:35:05.947 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:05.947 ... 00:35:05.947 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:05.947 ... 
00:35:05.947 fio-3.35 00:35:05.947 Starting 24 threads 00:35:18.156 00:35:18.156 filename0: (groupid=0, jobs=1): err= 0: pid=2023990: Wed Nov 27 05:57:04 2024 00:35:18.156 read: IOPS=535, BW=2144KiB/s (2195kB/s)(20.9MiB/10006msec) 00:35:18.156 slat (nsec): min=7107, max=92854, avg=24213.08, stdev=19801.54 00:35:18.156 clat (usec): min=11705, max=50482, avg=29651.54, stdev=3935.61 00:35:18.156 lat (usec): min=11718, max=50505, avg=29675.76, stdev=3937.62 00:35:18.156 clat percentiles (usec): 00:35:18.156 | 1.00th=[17695], 5.00th=[19268], 10.00th=[28443], 20.00th=[28705], 00:35:18.156 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30278], 60.00th=[30540], 00:35:18.156 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31851], 00:35:18.156 | 99.00th=[42206], 99.50th=[42206], 99.90th=[50070], 99.95th=[50594], 00:35:18.156 | 99.99th=[50594] 00:35:18.156 bw ( KiB/s): min= 1968, max= 2304, per=4.17%, avg=2136.42, stdev=100.68, samples=19 00:35:18.156 iops : min= 492, max= 576, avg=534.11, stdev=25.17, samples=19 00:35:18.156 lat (msec) : 20=5.93%, 50=93.88%, 100=0.19% 00:35:18.156 cpu : usr=98.64%, sys=0.97%, ctx=14, majf=0, minf=34 00:35:18.156 IO depths : 1=4.1%, 2=9.8%, 4=22.8%, 8=54.9%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:18.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.156 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.156 issued rwts: total=5362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.156 filename0: (groupid=0, jobs=1): err= 0: pid=2023991: Wed Nov 27 05:57:04 2024 00:35:18.156 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:35:18.156 slat (nsec): min=5288, max=97130, avg=41221.06, stdev=16282.06 00:35:18.156 clat (usec): min=12284, max=49086, avg=29683.13, stdev=1843.36 00:35:18.156 lat (usec): min=12330, max=49103, avg=29724.35, stdev=1837.47 00:35:18.156 clat percentiles (usec): 00:35:18.156 | 
1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.156 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:18.156 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.156 | 99.00th=[31327], 99.50th=[31589], 99.90th=[49021], 99.95th=[49021], 00:35:18.156 | 99.99th=[49021] 00:35:18.156 bw ( KiB/s): min= 1916, max= 2304, per=4.13%, avg=2114.89, stdev=98.78, samples=19 00:35:18.156 iops : min= 479, max= 576, avg=528.68, stdev=24.62, samples=19 00:35:18.156 lat (msec) : 20=0.60%, 50=99.40% 00:35:18.156 cpu : usr=98.69%, sys=0.84%, ctx=25, majf=0, minf=26 00:35:18.156 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.156 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.156 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.156 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.156 filename0: (groupid=0, jobs=1): err= 0: pid=2023992: Wed Nov 27 05:57:04 2024 00:35:18.156 read: IOPS=540, BW=2161KiB/s (2213kB/s)(21.1MiB/10009msec) 00:35:18.156 slat (nsec): min=7438, max=99090, avg=29673.12, stdev=19939.92 00:35:18.156 clat (usec): min=2353, max=44708, avg=29380.82, stdev=3402.22 00:35:18.156 lat (usec): min=2370, max=44731, avg=29410.50, stdev=3399.54 00:35:18.156 clat percentiles (usec): 00:35:18.156 | 1.00th=[ 9372], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.156 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:18.156 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:35:18.156 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32637], 99.95th=[41157], 00:35:18.156 | 99.99th=[44827] 00:35:18.156 bw ( KiB/s): min= 2048, max= 2693, per=4.21%, avg=2157.05, stdev=152.23, samples=20 00:35:18.156 iops : min= 512, max= 673, avg=539.25, stdev=38.01, samples=20 
00:35:18.157 lat (msec) : 4=0.72%, 10=0.46%, 20=1.26%, 50=97.56% 00:35:18.157 cpu : usr=98.64%, sys=0.96%, ctx=21, majf=0, minf=43 00:35:18.157 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:18.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.157 filename0: (groupid=0, jobs=1): err= 0: pid=2023993: Wed Nov 27 05:57:04 2024 00:35:18.157 read: IOPS=534, BW=2140KiB/s (2191kB/s)(20.9MiB/10019msec) 00:35:18.157 slat (nsec): min=7484, max=91683, avg=44016.91, stdev=16660.90 00:35:18.157 clat (usec): min=9651, max=31921, avg=29547.47, stdev=1886.18 00:35:18.157 lat (usec): min=9673, max=31940, avg=29591.48, stdev=1885.51 00:35:18.157 clat percentiles (usec): 00:35:18.157 | 1.00th=[18220], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.157 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:18.157 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.157 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:35:18.157 | 99.99th=[31851] 00:35:18.157 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2137.60, stdev=93.78, samples=20 00:35:18.157 iops : min= 512, max= 576, avg=534.40, stdev=23.45, samples=20 00:35:18.157 lat (msec) : 10=0.13%, 20=0.90%, 50=98.97% 00:35:18.157 cpu : usr=98.76%, sys=0.83%, ctx=18, majf=0, minf=49 00:35:18.157 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.157 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:18.157 filename0: (groupid=0, jobs=1): err= 0: pid=2023994: Wed Nov 27 05:57:04 2024 00:35:18.157 read: IOPS=534, BW=2140KiB/s (2191kB/s)(20.9MiB/10019msec) 00:35:18.157 slat (nsec): min=11572, max=94494, avg=45847.02, stdev=15398.61 00:35:18.157 clat (usec): min=9164, max=31923, avg=29524.96, stdev=1914.34 00:35:18.157 lat (usec): min=9176, max=31941, avg=29570.81, stdev=1913.97 00:35:18.157 clat percentiles (usec): 00:35:18.157 | 1.00th=[21103], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.157 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:18.157 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.157 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:35:18.157 | 99.99th=[31851] 00:35:18.157 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2137.60, stdev=93.78, samples=20 00:35:18.157 iops : min= 512, max= 576, avg=534.40, stdev=23.45, samples=20 00:35:18.157 lat (msec) : 10=0.30%, 20=0.60%, 50=99.10% 00:35:18.157 cpu : usr=98.73%, sys=0.87%, ctx=14, majf=0, minf=18 00:35:18.157 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.157 filename0: (groupid=0, jobs=1): err= 0: pid=2023995: Wed Nov 27 05:57:04 2024 00:35:18.157 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10007msec) 00:35:18.157 slat (nsec): min=7657, max=97238, avg=34681.86, stdev=18790.38 00:35:18.157 clat (usec): min=12692, max=77976, avg=29866.11, stdev=2827.84 00:35:18.157 lat (usec): min=12763, max=78037, avg=29900.79, stdev=2823.92 00:35:18.157 clat percentiles (usec): 00:35:18.157 | 1.00th=[27657], 5.00th=[27919], 
10.00th=[28181], 20.00th=[28443], 00:35:18.157 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:18.157 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:35:18.157 | 99.00th=[31589], 99.50th=[31851], 99.90th=[77071], 99.95th=[77071], 00:35:18.157 | 99.99th=[78119] 00:35:18.157 bw ( KiB/s): min= 1795, max= 2304, per=4.12%, avg=2108.53, stdev=114.90, samples=19 00:35:18.157 iops : min= 448, max= 576, avg=527.05, stdev=28.77, samples=19 00:35:18.157 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:35:18.157 cpu : usr=98.14%, sys=1.24%, ctx=68, majf=0, minf=34 00:35:18.157 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.157 filename0: (groupid=0, jobs=1): err= 0: pid=2023996: Wed Nov 27 05:57:04 2024 00:35:18.157 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10010msec) 00:35:18.157 slat (nsec): min=6314, max=88794, avg=36926.94, stdev=15789.87 00:35:18.157 clat (usec): min=18758, max=43355, avg=29766.65, stdev=1246.48 00:35:18.157 lat (usec): min=18801, max=43371, avg=29803.58, stdev=1238.68 00:35:18.157 clat percentiles (usec): 00:35:18.157 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.157 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:18.157 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:35:18.157 | 99.00th=[31327], 99.50th=[31851], 99.90th=[34866], 99.95th=[34866], 00:35:18.157 | 99.99th=[43254] 00:35:18.157 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2121.58, stdev=88.31, samples=19 00:35:18.157 iops : min= 512, max= 576, avg=530.32, stdev=22.03, samples=19 00:35:18.157 lat (msec) 
: 20=0.30%, 50=99.70% 00:35:18.157 cpu : usr=98.70%, sys=0.90%, ctx=12, majf=0, minf=22 00:35:18.157 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.157 filename0: (groupid=0, jobs=1): err= 0: pid=2023997: Wed Nov 27 05:57:04 2024 00:35:18.157 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10007msec) 00:35:18.157 slat (nsec): min=7133, max=94757, avg=15321.66, stdev=7452.14 00:35:18.157 clat (usec): min=13136, max=65705, avg=29995.91, stdev=2723.28 00:35:18.157 lat (usec): min=13145, max=65749, avg=30011.23, stdev=2724.13 00:35:18.157 clat percentiles (usec): 00:35:18.157 | 1.00th=[24511], 5.00th=[28443], 10.00th=[28443], 20.00th=[28705], 00:35:18.157 | 30.00th=[28705], 40.00th=[30540], 50.00th=[30540], 60.00th=[30540], 00:35:18.157 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:35:18.157 | 99.00th=[31851], 99.50th=[43254], 99.90th=[65274], 99.95th=[65799], 00:35:18.157 | 99.99th=[65799] 00:35:18.157 bw ( KiB/s): min= 1923, max= 2304, per=4.13%, avg=2115.26, stdev=107.22, samples=19 00:35:18.157 iops : min= 480, max= 576, avg=528.74, stdev=26.86, samples=19 00:35:18.157 lat (msec) : 20=0.96%, 50=98.74%, 100=0.30% 00:35:18.157 cpu : usr=98.70%, sys=0.90%, ctx=20, majf=0, minf=46 00:35:18.157 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:18.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.157 
filename1: (groupid=0, jobs=1): err= 0: pid=2023998: Wed Nov 27 05:57:04 2024 00:35:18.157 read: IOPS=534, BW=2140KiB/s (2191kB/s)(20.9MiB/10019msec) 00:35:18.157 slat (nsec): min=8111, max=95023, avg=46192.35, stdev=15400.94 00:35:18.157 clat (usec): min=9238, max=31896, avg=29516.03, stdev=1892.25 00:35:18.157 lat (usec): min=9272, max=31958, avg=29562.22, stdev=1892.53 00:35:18.157 clat percentiles (usec): 00:35:18.157 | 1.00th=[21103], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.157 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:18.157 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.157 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:35:18.157 | 99.99th=[31851] 00:35:18.157 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2137.60, stdev=93.78, samples=20 00:35:18.157 iops : min= 512, max= 576, avg=534.40, stdev=23.45, samples=20 00:35:18.157 lat (msec) : 10=0.26%, 20=0.67%, 50=99.07% 00:35:18.157 cpu : usr=98.69%, sys=0.89%, ctx=14, majf=0, minf=26 00:35:18.157 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.157 filename1: (groupid=0, jobs=1): err= 0: pid=2023999: Wed Nov 27 05:57:04 2024 00:35:18.157 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:35:18.157 slat (usec): min=5, max=120, avg=46.69, stdev=17.35 00:35:18.157 clat (usec): min=10974, max=53193, avg=29622.13, stdev=2017.09 00:35:18.157 lat (usec): min=10981, max=53210, avg=29668.82, stdev=2016.77 00:35:18.157 clat percentiles (usec): 00:35:18.157 | 1.00th=[27132], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.157 | 
30.00th=[28705], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:18.157 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:18.157 | 99.00th=[31327], 99.50th=[31589], 99.90th=[53216], 99.95th=[53216], 00:35:18.157 | 99.99th=[53216] 00:35:18.157 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=2115.11, stdev=98.33, samples=19 00:35:18.157 iops : min= 480, max= 576, avg=528.74, stdev=24.51, samples=19 00:35:18.157 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:35:18.157 cpu : usr=98.55%, sys=1.01%, ctx=38, majf=0, minf=43 00:35:18.157 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.157 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.157 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.157 filename1: (groupid=0, jobs=1): err= 0: pid=2024000: Wed Nov 27 05:57:04 2024 00:35:18.157 read: IOPS=532, BW=2130KiB/s (2181kB/s)(20.8MiB/10004msec) 00:35:18.157 slat (nsec): min=6995, max=95199, avg=18618.73, stdev=15320.41 00:35:18.158 clat (usec): min=17491, max=32260, avg=29895.32, stdev=1207.82 00:35:18.158 lat (usec): min=17500, max=32277, avg=29913.94, stdev=1203.34 00:35:18.158 clat percentiles (usec): 00:35:18.158 | 1.00th=[27657], 5.00th=[28181], 10.00th=[28443], 20.00th=[28705], 00:35:18.158 | 30.00th=[28967], 40.00th=[30278], 50.00th=[30540], 60.00th=[30540], 00:35:18.158 | 70.00th=[30540], 80.00th=[30802], 90.00th=[30802], 95.00th=[30802], 00:35:18.158 | 99.00th=[31065], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:35:18.158 | 99.99th=[32375] 00:35:18.158 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2122.11, stdev=88.64, samples=19 00:35:18.158 iops : min= 512, max= 576, avg=530.53, stdev=22.16, samples=19 00:35:18.158 lat (msec) : 20=0.30%, 50=99.70% 00:35:18.158 cpu : 
usr=98.70%, sys=0.91%, ctx=14, majf=0, minf=30 00:35:18.158 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.158 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.158 filename1: (groupid=0, jobs=1): err= 0: pid=2024001: Wed Nov 27 05:57:04 2024 00:35:18.158 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:35:18.158 slat (nsec): min=9215, max=94197, avg=40464.02, stdev=15344.02 00:35:18.158 clat (usec): min=12355, max=52887, avg=29687.96, stdev=1836.14 00:35:18.158 lat (usec): min=12399, max=52903, avg=29728.42, stdev=1831.44 00:35:18.158 clat percentiles (usec): 00:35:18.158 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.158 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:18.158 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.158 | 99.00th=[31327], 99.50th=[31589], 99.90th=[48497], 99.95th=[48497], 00:35:18.158 | 99.99th=[52691] 00:35:18.158 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=2115.11, stdev=98.33, samples=19 00:35:18.158 iops : min= 480, max= 576, avg=528.74, stdev=24.51, samples=19 00:35:18.158 lat (msec) : 20=0.60%, 50=99.36%, 100=0.04% 00:35:18.158 cpu : usr=98.71%, sys=0.89%, ctx=14, majf=0, minf=32 00:35:18.158 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.158 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.158 filename1: (groupid=0, jobs=1): err= 0: 
pid=2024002: Wed Nov 27 05:57:04 2024 00:35:18.158 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10012msec) 00:35:18.158 slat (nsec): min=4160, max=96182, avg=40285.18, stdev=16831.41 00:35:18.158 clat (usec): min=12015, max=51897, avg=29730.40, stdev=1955.84 00:35:18.158 lat (usec): min=12066, max=51910, avg=29770.69, stdev=1948.60 00:35:18.158 clat percentiles (usec): 00:35:18.158 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.158 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:18.158 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.158 | 99.00th=[31327], 99.50th=[31851], 99.90th=[51643], 99.95th=[51643], 00:35:18.158 | 99.99th=[51643] 00:35:18.158 bw ( KiB/s): min= 1920, max= 2308, per=4.13%, avg=2115.79, stdev=99.42, samples=19 00:35:18.158 iops : min= 480, max= 577, avg=528.95, stdev=24.86, samples=19 00:35:18.158 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:35:18.158 cpu : usr=98.61%, sys=0.93%, ctx=29, majf=0, minf=31 00:35:18.158 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.158 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.158 filename1: (groupid=0, jobs=1): err= 0: pid=2024003: Wed Nov 27 05:57:04 2024 00:35:18.158 read: IOPS=531, BW=2127KiB/s (2178kB/s)(20.8MiB/10016msec) 00:35:18.158 slat (nsec): min=4081, max=41488, avg=14735.72, stdev=3772.48 00:35:18.158 clat (usec): min=16729, max=43556, avg=29951.53, stdev=1574.43 00:35:18.158 lat (usec): min=16737, max=43568, avg=29966.27, stdev=1573.82 00:35:18.158 clat percentiles (usec): 00:35:18.158 | 1.00th=[28443], 5.00th=[28443], 10.00th=[28443], 20.00th=[28705], 00:35:18.158 | 30.00th=[28705], 40.00th=[30540], 
50.00th=[30540], 60.00th=[30540], 00:35:18.158 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:35:18.158 | 99.00th=[31851], 99.50th=[32113], 99.90th=[43779], 99.95th=[43779], 00:35:18.158 | 99.99th=[43779] 00:35:18.158 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=2114.84, stdev=98.17, samples=19 00:35:18.158 iops : min= 480, max= 576, avg=528.63, stdev=24.44, samples=19 00:35:18.158 lat (msec) : 20=0.45%, 50=99.55% 00:35:18.158 cpu : usr=98.68%, sys=0.91%, ctx=15, majf=0, minf=30 00:35:18.158 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:18.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 issued rwts: total=5326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.158 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.158 filename1: (groupid=0, jobs=1): err= 0: pid=2024004: Wed Nov 27 05:57:04 2024 00:35:18.158 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:35:18.158 slat (nsec): min=8598, max=98754, avg=40875.80, stdev=15505.98 00:35:18.158 clat (usec): min=12321, max=48596, avg=29685.13, stdev=1817.91 00:35:18.158 lat (usec): min=12365, max=48614, avg=29726.00, stdev=1813.26 00:35:18.158 clat percentiles (usec): 00:35:18.158 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.158 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:18.158 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.158 | 99.00th=[31327], 99.50th=[31851], 99.90th=[48497], 99.95th=[48497], 00:35:18.158 | 99.99th=[48497] 00:35:18.158 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=2115.11, stdev=98.33, samples=19 00:35:18.158 iops : min= 480, max= 576, avg=528.74, stdev=24.51, samples=19 00:35:18.158 lat (msec) : 20=0.60%, 50=99.40% 00:35:18.158 cpu : usr=98.58%, sys=1.02%, ctx=13, majf=0, minf=27 
00:35:18.158 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.158 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.158 filename1: (groupid=0, jobs=1): err= 0: pid=2024005: Wed Nov 27 05:57:04 2024 00:35:18.158 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 00:35:18.158 slat (usec): min=5, max=105, avg=30.29, stdev=16.74 00:35:18.158 clat (usec): min=18852, max=38176, avg=29837.63, stdev=1234.34 00:35:18.158 lat (usec): min=18905, max=38195, avg=29867.92, stdev=1227.24 00:35:18.158 clat percentiles (usec): 00:35:18.158 | 1.00th=[27657], 5.00th=[28181], 10.00th=[28181], 20.00th=[28443], 00:35:18.158 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:35:18.158 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:35:18.158 | 99.00th=[31589], 99.50th=[32113], 99.90th=[35914], 99.95th=[36439], 00:35:18.158 | 99.99th=[38011] 00:35:18.158 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2124.80, stdev=87.11, samples=20 00:35:18.158 iops : min= 512, max= 576, avg=531.20, stdev=21.78, samples=20 00:35:18.158 lat (msec) : 20=0.30%, 50=99.70% 00:35:18.158 cpu : usr=98.69%, sys=0.91%, ctx=14, majf=0, minf=26 00:35:18.158 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.158 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.158 filename2: (groupid=0, jobs=1): err= 0: pid=2024006: Wed Nov 27 05:57:04 2024 00:35:18.158 read: IOPS=533, 
BW=2134KiB/s (2185kB/s)(20.9MiB/10018msec) 00:35:18.158 slat (nsec): min=5558, max=97416, avg=45224.90, stdev=15728.01 00:35:18.158 clat (usec): min=17674, max=31970, avg=29578.59, stdev=1350.72 00:35:18.158 lat (usec): min=17683, max=32012, avg=29623.81, stdev=1351.95 00:35:18.158 clat percentiles (usec): 00:35:18.158 | 1.00th=[25297], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.158 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:18.158 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30540], 00:35:18.158 | 99.00th=[30802], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:35:18.158 | 99.99th=[31851] 00:35:18.158 bw ( KiB/s): min= 2048, max= 2304, per=4.16%, avg=2130.00, stdev=93.53, samples=20 00:35:18.158 iops : min= 512, max= 576, avg=532.45, stdev=23.34, samples=20 00:35:18.158 lat (msec) : 20=0.60%, 50=99.40% 00:35:18.158 cpu : usr=98.56%, sys=1.05%, ctx=17, majf=0, minf=33 00:35:18.158 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.158 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.158 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.158 filename2: (groupid=0, jobs=1): err= 0: pid=2024007: Wed Nov 27 05:57:04 2024 00:35:18.158 read: IOPS=534, BW=2140KiB/s (2191kB/s)(20.9MiB/10019msec) 00:35:18.158 slat (usec): min=8, max=125, avg=43.66, stdev=17.42 00:35:18.158 clat (usec): min=8863, max=31973, avg=29561.08, stdev=1929.08 00:35:18.158 lat (usec): min=8874, max=31991, avg=29604.75, stdev=1927.50 00:35:18.158 clat percentiles (usec): 00:35:18.158 | 1.00th=[21365], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.158 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:35:18.158 | 70.00th=[30540], 80.00th=[30540], 
90.00th=[30540], 95.00th=[30802], 00:35:18.158 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:35:18.158 | 99.99th=[31851] 00:35:18.158 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2137.60, stdev=93.78, samples=20 00:35:18.159 iops : min= 512, max= 576, avg=534.40, stdev=23.45, samples=20 00:35:18.159 lat (msec) : 10=0.30%, 20=0.60%, 50=99.10% 00:35:18.159 cpu : usr=98.62%, sys=0.91%, ctx=67, majf=0, minf=35 00:35:18.159 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.159 filename2: (groupid=0, jobs=1): err= 0: pid=2024008: Wed Nov 27 05:57:04 2024 00:35:18.159 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:35:18.159 slat (nsec): min=7926, max=95237, avg=39453.43, stdev=15847.24 00:35:18.159 clat (usec): min=12293, max=48629, avg=29689.72, stdev=1821.60 00:35:18.159 lat (usec): min=12321, max=48645, avg=29729.17, stdev=1816.92 00:35:18.159 clat percentiles (usec): 00:35:18.159 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.159 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:18.159 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.159 | 99.00th=[31327], 99.50th=[31589], 99.90th=[48497], 99.95th=[48497], 00:35:18.159 | 99.99th=[48497] 00:35:18.159 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=2115.11, stdev=98.33, samples=19 00:35:18.159 iops : min= 480, max= 576, avg=528.74, stdev=24.51, samples=19 00:35:18.159 lat (msec) : 20=0.60%, 50=99.40% 00:35:18.159 cpu : usr=98.81%, sys=0.79%, ctx=14, majf=0, minf=28 00:35:18.159 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, 
>=64=0.0% 00:35:18.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.159 filename2: (groupid=0, jobs=1): err= 0: pid=2024009: Wed Nov 27 05:57:04 2024 00:35:18.159 read: IOPS=545, BW=2183KiB/s (2235kB/s)(21.3MiB/10010msec) 00:35:18.159 slat (nsec): min=6476, max=99331, avg=37644.93, stdev=18434.87 00:35:18.159 clat (usec): min=1848, max=32005, avg=29002.98, stdev=4331.58 00:35:18.159 lat (usec): min=1870, max=32040, avg=29040.62, stdev=4334.89 00:35:18.159 clat percentiles (usec): 00:35:18.159 | 1.00th=[ 2802], 5.00th=[27919], 10.00th=[28443], 20.00th=[28705], 00:35:18.159 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:18.159 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.159 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:35:18.159 | 99.99th=[32113] 00:35:18.159 bw ( KiB/s): min= 2048, max= 3121, per=4.25%, avg=2178.45, stdev=237.66, samples=20 00:35:18.159 iops : min= 512, max= 780, avg=544.60, stdev=59.36, samples=20 00:35:18.159 lat (msec) : 2=0.13%, 4=1.74%, 10=0.60%, 20=0.88%, 50=96.65% 00:35:18.159 cpu : usr=98.43%, sys=0.97%, ctx=86, majf=0, minf=42 00:35:18.159 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:18.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 issued rwts: total=5463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.159 filename2: (groupid=0, jobs=1): err= 0: pid=2024010: Wed Nov 27 05:57:04 2024 00:35:18.159 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10011msec) 
00:35:18.159 slat (nsec): min=5050, max=98065, avg=39381.73, stdev=15609.21 00:35:18.159 clat (usec): min=18844, max=45473, avg=29744.16, stdev=1277.29 00:35:18.159 lat (usec): min=18889, max=45491, avg=29783.54, stdev=1269.68 00:35:18.159 clat percentiles (usec): 00:35:18.159 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.159 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:18.159 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.159 | 99.00th=[31327], 99.50th=[31851], 99.90th=[36439], 99.95th=[36439], 00:35:18.159 | 99.99th=[45351] 00:35:18.159 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2124.80, stdev=87.11, samples=20 00:35:18.159 iops : min= 512, max= 576, avg=531.20, stdev=21.78, samples=20 00:35:18.159 lat (msec) : 20=0.30%, 50=99.70% 00:35:18.159 cpu : usr=98.68%, sys=0.93%, ctx=13, majf=0, minf=29 00:35:18.159 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.159 filename2: (groupid=0, jobs=1): err= 0: pid=2024011: Wed Nov 27 05:57:04 2024 00:35:18.159 read: IOPS=530, BW=2122KiB/s (2173kB/s)(20.8MiB/10013msec) 00:35:18.159 slat (nsec): min=3877, max=88324, avg=40027.18, stdev=15163.45 00:35:18.159 clat (usec): min=12302, max=53628, avg=29770.76, stdev=1778.68 00:35:18.159 lat (usec): min=12339, max=53640, avg=29810.79, stdev=1772.78 00:35:18.159 clat percentiles (usec): 00:35:18.159 | 1.00th=[27919], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.159 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:18.159 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 
00:35:18.159 | 99.00th=[31327], 99.50th=[31851], 99.90th=[53740], 99.95th=[53740], 00:35:18.159 | 99.99th=[53740] 00:35:18.159 bw ( KiB/s): min= 1920, max= 2308, per=4.13%, avg=2115.32, stdev=99.12, samples=19 00:35:18.159 iops : min= 480, max= 577, avg=528.79, stdev=24.76, samples=19 00:35:18.159 lat (msec) : 20=0.32%, 50=99.38%, 100=0.30% 00:35:18.159 cpu : usr=98.54%, sys=1.07%, ctx=16, majf=0, minf=21 00:35:18.159 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:18.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 issued rwts: total=5313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.159 filename2: (groupid=0, jobs=1): err= 0: pid=2024012: Wed Nov 27 05:57:04 2024 00:35:18.159 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10012msec) 00:35:18.159 slat (usec): min=3, max=159, avg=41.02, stdev=18.62 00:35:18.159 clat (usec): min=12233, max=51590, avg=29682.06, stdev=1934.89 00:35:18.159 lat (usec): min=12241, max=51602, avg=29723.08, stdev=1928.82 00:35:18.159 clat percentiles (usec): 00:35:18.159 | 1.00th=[27395], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.159 | 30.00th=[28705], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:35:18.159 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.159 | 99.00th=[31065], 99.50th=[31851], 99.90th=[51643], 99.95th=[51643], 00:35:18.159 | 99.99th=[51643] 00:35:18.159 bw ( KiB/s): min= 1923, max= 2304, per=4.13%, avg=2115.95, stdev=98.81, samples=19 00:35:18.159 iops : min= 480, max= 576, avg=528.95, stdev=24.78, samples=19 00:35:18.159 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:35:18.159 cpu : usr=98.50%, sys=1.00%, ctx=57, majf=0, minf=38 00:35:18.159 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.159 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.159 filename2: (groupid=0, jobs=1): err= 0: pid=2024013: Wed Nov 27 05:57:04 2024 00:35:18.159 read: IOPS=534, BW=2140KiB/s (2191kB/s)(20.9MiB/10019msec) 00:35:18.159 slat (usec): min=5, max=319, avg=47.29, stdev=19.40 00:35:18.159 clat (usec): min=4580, max=31921, avg=29491.55, stdev=1996.91 00:35:18.159 lat (usec): min=4597, max=31961, avg=29538.84, stdev=1993.47 00:35:18.159 clat percentiles (usec): 00:35:18.159 | 1.00th=[20579], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:35:18.159 | 30.00th=[28705], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:35:18.159 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:35:18.159 | 99.00th=[31065], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 00:35:18.159 | 99.99th=[31851] 00:35:18.159 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2137.60, stdev=93.78, samples=20 00:35:18.159 iops : min= 512, max= 576, avg=534.40, stdev=23.45, samples=20 00:35:18.159 lat (msec) : 10=0.30%, 20=0.60%, 50=99.10% 00:35:18.159 cpu : usr=98.67%, sys=0.93%, ctx=13, majf=0, minf=23 00:35:18.159 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:18.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.159 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:18.159 00:35:18.159 Run status group 0 (all jobs): 00:35:18.159 READ: bw=50.0MiB/s (52.4MB/s), 2122KiB/s-2183KiB/s (2173kB/s-2235kB/s), io=501MiB (525MB), run=10004-10019msec 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:18.159 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:18.160 05:57:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 bdev_null0 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 [2024-11-27 05:57:05.042438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 bdev_null1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:35:18.160 { 00:35:18.160 "params": { 00:35:18.160 "name": "Nvme$subsystem", 00:35:18.160 "trtype": "$TEST_TRANSPORT", 00:35:18.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:18.160 "adrfam": "ipv4", 00:35:18.160 "trsvcid": "$NVMF_PORT", 00:35:18.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:18.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:18.160 "hdgst": ${hdgst:-false}, 00:35:18.160 "ddgst": ${ddgst:-false} 00:35:18.160 }, 00:35:18.160 "method": "bdev_nvme_attach_controller" 00:35:18.160 } 00:35:18.160 EOF 00:35:18.160 )") 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # grep libasan 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:18.160 { 00:35:18.160 "params": { 00:35:18.160 "name": "Nvme$subsystem", 00:35:18.160 "trtype": "$TEST_TRANSPORT", 00:35:18.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:18.160 "adrfam": "ipv4", 00:35:18.160 "trsvcid": "$NVMF_PORT", 00:35:18.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:18.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:18.160 "hdgst": ${hdgst:-false}, 00:35:18.160 "ddgst": ${ddgst:-false} 00:35:18.160 }, 00:35:18.160 "method": "bdev_nvme_attach_controller" 00:35:18.160 } 00:35:18.160 EOF 00:35:18.160 )") 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:18.160 05:57:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:18.160 "params": { 00:35:18.160 "name": "Nvme0", 00:35:18.160 "trtype": "tcp", 00:35:18.160 "traddr": "10.0.0.2", 00:35:18.160 "adrfam": "ipv4", 00:35:18.160 "trsvcid": "4420", 00:35:18.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.160 "hdgst": false, 00:35:18.160 "ddgst": false 00:35:18.161 }, 00:35:18.161 "method": "bdev_nvme_attach_controller" 00:35:18.161 },{ 00:35:18.161 "params": { 00:35:18.161 "name": "Nvme1", 00:35:18.161 "trtype": "tcp", 00:35:18.161 "traddr": "10.0.0.2", 00:35:18.161 "adrfam": "ipv4", 00:35:18.161 "trsvcid": "4420", 00:35:18.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:18.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:18.161 "hdgst": false, 00:35:18.161 "ddgst": false 00:35:18.161 }, 00:35:18.161 "method": "bdev_nvme_attach_controller" 00:35:18.161 }' 00:35:18.161 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:18.161 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:18.161 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.161 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.161 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:18.161 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:18.161 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:18.161 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:18.161 05:57:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:18.161 05:57:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.161 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:18.161 ... 00:35:18.161 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:18.161 ... 00:35:18.161 fio-3.35 00:35:18.161 Starting 4 threads 00:35:23.429 00:35:23.429 filename0: (groupid=0, jobs=1): err= 0: pid=2026050: Wed Nov 27 05:57:11 2024 00:35:23.429 read: IOPS=2480, BW=19.4MiB/s (20.3MB/s)(96.9MiB/5001msec) 00:35:23.429 slat (nsec): min=6223, max=87303, avg=16395.23, stdev=11179.26 00:35:23.429 clat (usec): min=688, max=7725, avg=3175.44, stdev=551.24 00:35:23.429 lat (usec): min=711, max=7756, avg=3191.83, stdev=551.82 00:35:23.429 clat percentiles (usec): 00:35:23.429 | 1.00th=[ 2057], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2835], 00:35:23.429 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3097], 60.00th=[ 3195], 00:35:23.429 | 70.00th=[ 3326], 80.00th=[ 3523], 90.00th=[ 3785], 95.00th=[ 4146], 00:35:23.429 | 99.00th=[ 5080], 99.50th=[ 5407], 99.90th=[ 6259], 99.95th=[ 6652], 00:35:23.429 | 99.99th=[ 7701] 00:35:23.429 bw ( KiB/s): min=17760, max=20928, per=24.18%, avg=19776.00, stdev=919.44, samples=9 00:35:23.429 iops : min= 2220, max= 2616, avg=2472.00, stdev=114.93, samples=9 00:35:23.429 lat (usec) : 750=0.02%, 1000=0.02% 00:35:23.429 lat (msec) : 2=0.68%, 4=92.37%, 10=6.92% 00:35:23.429 cpu : usr=97.22%, sys=2.26%, ctx=50, majf=0, minf=9 00:35:23.429 IO depths : 1=0.2%, 2=5.1%, 4=66.0%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.429 
complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.429 issued rwts: total=12407,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:23.429 filename0: (groupid=0, jobs=1): err= 0: pid=2026051: Wed Nov 27 05:57:11 2024 00:35:23.429 read: IOPS=2679, BW=20.9MiB/s (22.0MB/s)(105MiB/5002msec) 00:35:23.429 slat (nsec): min=6108, max=61942, avg=12631.76, stdev=7359.29 00:35:23.429 clat (usec): min=631, max=5918, avg=2945.58, stdev=440.70 00:35:23.429 lat (usec): min=654, max=5930, avg=2958.21, stdev=441.24 00:35:23.429 clat percentiles (usec): 00:35:23.429 | 1.00th=[ 1860], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2606], 00:35:23.429 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 3032], 00:35:23.429 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3458], 95.00th=[ 3621], 00:35:23.429 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 4883], 99.95th=[ 5145], 00:35:23.429 | 99.99th=[ 5735] 00:35:23.429 bw ( KiB/s): min=19808, max=22589, per=26.11%, avg=21357.89, stdev=985.27, samples=9 00:35:23.429 iops : min= 2476, max= 2823, avg=2669.67, stdev=123.06, samples=9 00:35:23.429 lat (usec) : 750=0.01%, 1000=0.01% 00:35:23.429 lat (msec) : 2=1.57%, 4=96.93%, 10=1.48% 00:35:23.429 cpu : usr=97.12%, sys=2.50%, ctx=6, majf=0, minf=9 00:35:23.429 IO depths : 1=0.4%, 2=9.2%, 4=61.8%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.429 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.429 issued rwts: total=13405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:23.429 filename1: (groupid=0, jobs=1): err= 0: pid=2026052: Wed Nov 27 05:57:11 2024 00:35:23.429 read: IOPS=2542, BW=19.9MiB/s (20.8MB/s)(99.3MiB/5002msec) 00:35:23.429 slat (nsec): min=6131, max=64466, avg=12242.86, stdev=7273.79 00:35:23.429 clat 
(usec): min=640, max=6245, avg=3110.81, stdev=509.30 00:35:23.429 lat (usec): min=651, max=6254, avg=3123.05, stdev=509.35 00:35:23.429 clat percentiles (usec): 00:35:23.429 | 1.00th=[ 1942], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2769], 00:35:23.429 | 30.00th=[ 2900], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3163], 00:35:23.429 | 70.00th=[ 3294], 80.00th=[ 3425], 90.00th=[ 3687], 95.00th=[ 3982], 00:35:23.429 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5538], 99.95th=[ 5866], 00:35:23.429 | 99.99th=[ 6194] 00:35:23.429 bw ( KiB/s): min=17936, max=21760, per=24.80%, avg=20282.67, stdev=1138.93, samples=9 00:35:23.429 iops : min= 2242, max= 2720, avg=2535.33, stdev=142.37, samples=9 00:35:23.429 lat (usec) : 750=0.02% 00:35:23.429 lat (msec) : 2=1.34%, 4=93.82%, 10=4.83% 00:35:23.429 cpu : usr=97.12%, sys=2.52%, ctx=10, majf=0, minf=9 00:35:23.429 IO depths : 1=0.4%, 2=4.5%, 4=67.3%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.429 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.429 issued rwts: total=12716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:23.429 filename1: (groupid=0, jobs=1): err= 0: pid=2026053: Wed Nov 27 05:57:11 2024 00:35:23.429 read: IOPS=2521, BW=19.7MiB/s (20.7MB/s)(98.5MiB/5001msec) 00:35:23.429 slat (nsec): min=6157, max=62248, avg=12287.50, stdev=7199.33 00:35:23.429 clat (usec): min=986, max=5874, avg=3136.88, stdev=491.07 00:35:23.429 lat (usec): min=993, max=5915, avg=3149.17, stdev=491.06 00:35:23.429 clat percentiles (usec): 00:35:23.429 | 1.00th=[ 2040], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2802], 00:35:23.429 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3097], 60.00th=[ 3195], 00:35:23.429 | 70.00th=[ 3294], 80.00th=[ 3458], 90.00th=[ 3687], 95.00th=[ 4015], 00:35:23.429 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 5407], 99.95th=[ 
5604], 00:35:23.429 | 99.99th=[ 5866] 00:35:23.429 bw ( KiB/s): min=18640, max=21264, per=24.61%, avg=20127.11, stdev=855.89, samples=9 00:35:23.429 iops : min= 2330, max= 2658, avg=2515.89, stdev=106.99, samples=9 00:35:23.429 lat (usec) : 1000=0.02% 00:35:23.429 lat (msec) : 2=0.85%, 4=94.15%, 10=4.99% 00:35:23.429 cpu : usr=97.26%, sys=2.38%, ctx=6, majf=0, minf=9 00:35:23.429 IO depths : 1=0.4%, 2=5.0%, 4=66.8%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.429 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.429 issued rwts: total=12608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.429 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:23.429 00:35:23.429 Run status group 0 (all jobs): 00:35:23.429 READ: bw=79.9MiB/s (83.7MB/s), 19.4MiB/s-20.9MiB/s (20.3MB/s-22.0MB/s), io=400MiB (419MB), run=5001-5002msec 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:23.429 05:57:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:23.429 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.430 00:35:23.430 real 0m24.561s 00:35:23.430 user 4m53.251s 00:35:23.430 sys 0m4.702s 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:23.430 05:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:23.430 ************************************ 00:35:23.430 END TEST fio_dif_rand_params 00:35:23.430 ************************************ 00:35:23.430 05:57:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:23.430 05:57:11 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:23.430 05:57:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:23.430 05:57:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:23.430 ************************************ 00:35:23.430 START TEST fio_dif_digest 00:35:23.430 ************************************ 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.430 bdev_null0 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.430 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:23.690 [2024-11-27 05:57:11.447434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:23.690 { 00:35:23.690 "params": { 00:35:23.690 "name": "Nvme$subsystem", 00:35:23.690 "trtype": "$TEST_TRANSPORT", 00:35:23.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:23.690 "adrfam": "ipv4", 00:35:23.690 "trsvcid": "$NVMF_PORT", 00:35:23.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:23.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:23.690 "hdgst": ${hdgst:-false}, 00:35:23.690 "ddgst": ${ddgst:-false} 00:35:23.690 }, 00:35:23.690 "method": "bdev_nvme_attach_controller" 00:35:23.690 } 00:35:23.690 EOF 00:35:23.690 )") 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:23.690 "params": { 00:35:23.690 "name": "Nvme0", 00:35:23.690 "trtype": "tcp", 00:35:23.690 "traddr": "10.0.0.2", 00:35:23.690 "adrfam": "ipv4", 00:35:23.690 "trsvcid": "4420", 00:35:23.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:23.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:23.690 "hdgst": true, 00:35:23.690 "ddgst": true 00:35:23.690 }, 00:35:23.690 "method": "bdev_nvme_attach_controller" 00:35:23.690 }' 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:23.690 05:57:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:23.949 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:23.949 ... 
00:35:23.949 fio-3.35 00:35:23.949 Starting 3 threads 00:35:36.156 00:35:36.157 filename0: (groupid=0, jobs=1): err= 0: pid=2027630: Wed Nov 27 05:57:22 2024 00:35:36.157 read: IOPS=281, BW=35.2MiB/s (36.9MB/s)(352MiB/10007msec) 00:35:36.157 slat (nsec): min=6674, max=76033, avg=25413.16, stdev=7730.81 00:35:36.157 clat (usec): min=8044, max=17323, avg=10636.20, stdev=905.25 00:35:36.157 lat (usec): min=8075, max=17352, avg=10661.61, stdev=904.60 00:35:36.157 clat percentiles (usec): 00:35:36.157 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:35:36.157 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:35:36.157 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[12387], 00:35:36.157 | 99.00th=[13304], 99.50th=[13566], 99.90th=[17171], 99.95th=[17433], 00:35:36.157 | 99.99th=[17433] 00:35:36.157 bw ( KiB/s): min=30208, max=37376, per=34.71%, avg=36006.40, stdev=1842.82, samples=20 00:35:36.157 iops : min= 236, max= 292, avg=281.30, stdev=14.40, samples=20 00:35:36.157 lat (msec) : 10=21.71%, 20=78.29% 00:35:36.157 cpu : usr=96.80%, sys=2.86%, ctx=21, majf=0, minf=9 00:35:36.157 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.157 issued rwts: total=2815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.157 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.157 filename0: (groupid=0, jobs=1): err= 0: pid=2027631: Wed Nov 27 05:57:22 2024 00:35:36.157 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(341MiB/10044msec) 00:35:36.157 slat (nsec): min=6422, max=40146, avg=17215.63, stdev=6882.65 00:35:36.157 clat (usec): min=8363, max=51380, avg=11014.48, stdev=1420.63 00:35:36.157 lat (usec): min=8391, max=51389, avg=11031.69, stdev=1420.64 00:35:36.157 clat percentiles (usec): 00:35:36.157 | 1.00th=[ 9110], 
5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:35:36.157 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:35:36.157 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12125], 95.00th=[12911], 00:35:36.157 | 99.00th=[13960], 99.50th=[14615], 99.90th=[16909], 99.95th=[49546], 00:35:36.157 | 99.99th=[51119] 00:35:36.157 bw ( KiB/s): min=29440, max=36352, per=33.63%, avg=34880.00, stdev=1906.48, samples=20 00:35:36.157 iops : min= 230, max= 284, avg=272.50, stdev=14.89, samples=20 00:35:36.157 lat (msec) : 10=11.26%, 20=88.67%, 50=0.04%, 100=0.04% 00:35:36.157 cpu : usr=95.35%, sys=4.28%, ctx=17, majf=0, minf=11 00:35:36.157 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.157 issued rwts: total=2727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.157 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.157 filename0: (groupid=0, jobs=1): err= 0: pid=2027632: Wed Nov 27 05:57:22 2024 00:35:36.157 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(325MiB/10004msec) 00:35:36.157 slat (nsec): min=6308, max=47147, avg=16539.09, stdev=6724.04 00:35:36.157 clat (usec): min=5207, max=16331, avg=11537.42, stdev=974.07 00:35:36.157 lat (usec): min=5216, max=16345, avg=11553.96, stdev=973.90 00:35:36.157 clat percentiles (usec): 00:35:36.157 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10421], 20.00th=[10814], 00:35:36.157 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:35:36.157 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12780], 95.00th=[13435], 00:35:36.157 | 99.00th=[14615], 99.50th=[15270], 99.90th=[15926], 99.95th=[15926], 00:35:36.157 | 99.99th=[16319] 00:35:36.157 bw ( KiB/s): min=27904, max=34560, per=31.99%, avg=33185.68, stdev=1732.52, samples=19 00:35:36.157 iops : min= 218, max= 270, avg=259.26, stdev=13.54, 
samples=19 00:35:36.157 lat (msec) : 10=2.62%, 20=97.38% 00:35:36.157 cpu : usr=96.15%, sys=3.50%, ctx=17, majf=0, minf=12 00:35:36.157 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.157 issued rwts: total=2597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.157 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:36.157 00:35:36.157 Run status group 0 (all jobs): 00:35:36.157 READ: bw=101MiB/s (106MB/s), 32.4MiB/s-35.2MiB/s (34.0MB/s-36.9MB/s), io=1017MiB (1067MB), run=10004-10044msec 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.157 00:35:36.157 real 0m11.256s 
00:35:36.157 user 0m36.021s 00:35:36.157 sys 0m1.449s 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:36.157 05:57:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:36.157 ************************************ 00:35:36.157 END TEST fio_dif_digest 00:35:36.157 ************************************ 00:35:36.157 05:57:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:36.157 05:57:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:36.157 rmmod nvme_tcp 00:35:36.157 rmmod nvme_fabrics 00:35:36.157 rmmod nvme_keyring 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2018506 ']' 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2018506 00:35:36.157 05:57:22 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2018506 ']' 00:35:36.157 05:57:22 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2018506 00:35:36.157 05:57:22 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:36.157 05:57:22 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:36.157 05:57:22 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018506 00:35:36.157 05:57:22 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:36.157 05:57:22 nvmf_dif -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:36.157 05:57:22 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018506' 00:35:36.157 killing process with pid 2018506 00:35:36.157 05:57:22 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2018506 00:35:36.157 05:57:22 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2018506 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:36.157 05:57:22 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:38.060 Waiting for block devices as requested 00:35:38.060 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:38.060 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:38.060 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:38.060 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:38.060 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:38.319 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:38.319 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:38.319 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:38.578 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:38.578 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:38.578 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:38.578 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:38.837 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:38.837 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:38.837 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:39.107 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:39.107 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:39.107 05:57:27 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:39.107 05:57:27 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:39.107 05:57:27 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:39.107 05:57:27 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:39.107 05:57:27 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:35:39.107 05:57:27 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:39.107 05:57:27 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:39.107 05:57:27 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:39.107 05:57:27 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.107 05:57:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:39.107 05:57:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.634 05:57:29 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:41.634 00:35:41.634 real 1m15.156s 00:35:41.634 user 7m12.951s 00:35:41.634 sys 0m19.865s 00:35:41.634 05:57:29 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:41.634 05:57:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:41.634 ************************************ 00:35:41.634 END TEST nvmf_dif 00:35:41.635 ************************************ 00:35:41.635 05:57:29 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:41.635 05:57:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:41.635 05:57:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:41.635 05:57:29 -- common/autotest_common.sh@10 -- # set +x 00:35:41.635 ************************************ 00:35:41.635 START TEST nvmf_abort_qd_sizes 00:35:41.635 ************************************ 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:41.635 * Looking for test storage... 
00:35:41.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:41.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.635 --rc genhtml_branch_coverage=1 00:35:41.635 --rc genhtml_function_coverage=1 00:35:41.635 --rc genhtml_legend=1 00:35:41.635 --rc geninfo_all_blocks=1 00:35:41.635 --rc geninfo_unexecuted_blocks=1 00:35:41.635 00:35:41.635 ' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:41.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.635 --rc genhtml_branch_coverage=1 00:35:41.635 --rc genhtml_function_coverage=1 00:35:41.635 --rc genhtml_legend=1 00:35:41.635 --rc 
geninfo_all_blocks=1 00:35:41.635 --rc geninfo_unexecuted_blocks=1 00:35:41.635 00:35:41.635 ' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:41.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.635 --rc genhtml_branch_coverage=1 00:35:41.635 --rc genhtml_function_coverage=1 00:35:41.635 --rc genhtml_legend=1 00:35:41.635 --rc geninfo_all_blocks=1 00:35:41.635 --rc geninfo_unexecuted_blocks=1 00:35:41.635 00:35:41.635 ' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:41.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.635 --rc genhtml_branch_coverage=1 00:35:41.635 --rc genhtml_function_coverage=1 00:35:41.635 --rc genhtml_legend=1 00:35:41.635 --rc geninfo_all_blocks=1 00:35:41.635 --rc geninfo_unexecuted_blocks=1 00:35:41.635 00:35:41.635 ' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:41.635 05:57:29 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:41.635 05:57:29 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:41.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:41.635 05:57:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:48.204 05:57:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:48.204 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:48.204 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:48.204 Found net devices under 0000:86:00.0: cvl_0_0 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:48.204 Found net devices under 0000:86:00.1: cvl_0_1 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:48.204 05:57:34 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:48.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:48.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:35:48.204 00:35:48.204 --- 10.0.0.2 ping statistics --- 00:35:48.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.204 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:48.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:48.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:35:48.204 00:35:48.204 --- 10.0.0.1 ping statistics --- 00:35:48.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:48.204 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:48.204 05:57:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:50.110 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:50.111 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:50.111 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:50.111 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:50.111 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:50.111 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:50.111 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:50.111 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:50.370 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:50.370 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:50.370 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:50.370 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:50.370 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:50.370 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:50.370 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:50.370 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:51.748 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:51.748 05:57:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2035433 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2035433 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2035433 ']' 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:51.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:51.748 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:51.748 [2024-11-27 05:57:39.701656] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:35:51.748 [2024-11-27 05:57:39.701705] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:52.008 [2024-11-27 05:57:39.781724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:52.008 [2024-11-27 05:57:39.825728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:52.008 [2024-11-27 05:57:39.825767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:52.008 [2024-11-27 05:57:39.825775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:52.008 [2024-11-27 05:57:39.825781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:52.008 [2024-11-27 05:57:39.825786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:52.008 [2024-11-27 05:57:39.827239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.008 [2024-11-27 05:57:39.827349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:52.008 [2024-11-27 05:57:39.827379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:52.008 [2024-11-27 05:57:39.827380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.008 05:57:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:52.269 ************************************ 00:35:52.269 START TEST spdk_target_abort 00:35:52.269 ************************************ 00:35:52.269 05:57:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:52.269 05:57:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:52.269 05:57:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:52.269 05:57:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.269 05:57:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:55.558 spdk_targetn1 00:35:55.558 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.558 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:55.558 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.558 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:55.558 [2024-11-27 05:57:42.846575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:55.559 [2024-11-27 05:57:42.891627] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:55.559 05:57:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:58.093 Initializing NVMe Controllers 00:35:58.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:58.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:58.093 Initialization complete. Launching workers. 
00:35:58.093 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15693, failed: 0 00:35:58.093 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1313, failed to submit 14380 00:35:58.093 success 679, unsuccessful 634, failed 0 00:35:58.093 05:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:58.093 05:57:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:02.293 Initializing NVMe Controllers 00:36:02.293 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:02.293 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:02.293 Initialization complete. Launching workers. 00:36:02.293 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8520, failed: 0 00:36:02.293 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1260, failed to submit 7260 00:36:02.293 success 319, unsuccessful 941, failed 0 00:36:02.293 05:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:02.293 05:57:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:04.827 Initializing NVMe Controllers 00:36:04.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:04.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:04.827 Initialization complete. Launching workers. 
00:36:04.827 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38431, failed: 0 00:36:04.827 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2914, failed to submit 35517 00:36:04.827 success 588, unsuccessful 2326, failed 0 00:36:04.827 05:57:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:04.827 05:57:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.827 05:57:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:04.827 05:57:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.828 05:57:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:04.828 05:57:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.828 05:57:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2035433 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2035433 ']' 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2035433 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2035433 00:36:06.732 05:57:54 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2035433' 00:36:06.732 killing process with pid 2035433 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2035433 00:36:06.732 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2035433 00:36:06.991 00:36:06.991 real 0m14.761s 00:36:06.991 user 0m56.263s 00:36:06.991 sys 0m2.698s 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.991 ************************************ 00:36:06.991 END TEST spdk_target_abort 00:36:06.991 ************************************ 00:36:06.991 05:57:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:06.991 05:57:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:06.991 05:57:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.991 05:57:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:06.991 ************************************ 00:36:06.991 START TEST kernel_target_abort 00:36:06.991 ************************************ 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:06.991 05:57:54 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:06.991 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:06.992 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:06.992 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:06.992 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:06.992 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:06.992 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:36:06.992 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:06.992 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:06.992 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:06.992 05:57:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:10.279 Waiting for block devices as requested 00:36:10.279 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:10.279 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:10.279 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:10.279 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:10.279 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:10.279 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:10.279 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:10.279 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:10.279 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:10.538 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:10.538 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:10.538 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:10.797 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:10.797 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:10.797 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:10.797 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:11.057 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:11.057 05:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:11.057 05:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:11.057 05:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:11.057 05:57:58 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:11.057 05:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:11.057 05:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:11.057 05:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:11.057 05:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:11.057 05:57:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:11.057 No valid GPT data, bailing 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:11.057 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:11.315 00:36:11.315 Discovery Log Number of Records 2, Generation counter 2 00:36:11.315 =====Discovery Log Entry 0====== 00:36:11.315 trtype: tcp 00:36:11.315 adrfam: ipv4 00:36:11.315 subtype: current discovery subsystem 00:36:11.315 treq: not specified, sq flow control disable supported 00:36:11.315 portid: 1 00:36:11.315 trsvcid: 4420 00:36:11.315 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:11.315 traddr: 10.0.0.1 00:36:11.315 eflags: none 00:36:11.315 sectype: none 00:36:11.315 =====Discovery Log Entry 1====== 00:36:11.315 trtype: tcp 00:36:11.315 adrfam: ipv4 00:36:11.315 subtype: nvme subsystem 00:36:11.315 treq: not specified, sq flow control disable supported 00:36:11.315 portid: 1 00:36:11.315 trsvcid: 4420 00:36:11.315 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:11.315 traddr: 10.0.0.1 00:36:11.315 eflags: none 00:36:11.315 sectype: none 00:36:11.315 05:57:59 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:11.315 05:57:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:14.603 Initializing NVMe Controllers 00:36:14.603 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:14.603 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:14.603 Initialization complete. Launching workers. 
00:36:14.603 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93836, failed: 0 00:36:14.603 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93836, failed to submit 0 00:36:14.603 success 0, unsuccessful 93836, failed 0 00:36:14.603 05:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:14.603 05:58:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:17.895 Initializing NVMe Controllers 00:36:17.895 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:17.895 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:17.895 Initialization complete. Launching workers. 00:36:17.895 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150007, failed: 0 00:36:17.895 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37826, failed to submit 112181 00:36:17.895 success 0, unsuccessful 37826, failed 0 00:36:17.895 05:58:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:17.895 05:58:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.184 Initializing NVMe Controllers 00:36:21.184 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:21.184 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:21.184 Initialization complete. Launching workers. 
00:36:21.184 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141942, failed: 0 00:36:21.184 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35562, failed to submit 106380 00:36:21.184 success 0, unsuccessful 35562, failed 0 00:36:21.184 05:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:21.184 05:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:21.184 05:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:21.184 05:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:21.184 05:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:21.184 05:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:21.184 05:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:21.184 05:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:21.184 05:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:21.184 05:58:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:23.719 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:23.719 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:25.097 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:25.097 00:36:25.097 real 0m18.069s 00:36:25.097 user 0m9.160s 00:36:25.097 sys 0m5.072s 00:36:25.097 05:58:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.097 05:58:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:25.097 ************************************ 00:36:25.097 END TEST kernel_target_abort 00:36:25.097 ************************************ 00:36:25.097 05:58:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:25.097 05:58:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:25.097 05:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:25.097 05:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:25.097 05:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:25.097 05:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:25.097 05:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:25.097 05:58:12 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:25.097 rmmod nvme_tcp 00:36:25.097 rmmod nvme_fabrics 00:36:25.097 rmmod nvme_keyring 00:36:25.097 05:58:13 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:36:25.097 05:58:13 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:25.097 05:58:13 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:25.097 05:58:13 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2035433 ']' 00:36:25.097 05:58:13 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2035433 00:36:25.097 05:58:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2035433 ']' 00:36:25.097 05:58:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2035433 00:36:25.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2035433) - No such process 00:36:25.097 05:58:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2035433 is not found' 00:36:25.097 Process with pid 2035433 is not found 00:36:25.097 05:58:13 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:25.097 05:58:13 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:28.387 Waiting for block devices as requested 00:36:28.387 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:28.387 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:28.387 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:28.387 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:28.387 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:28.387 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:28.387 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:28.387 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:28.645 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:28.645 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:28.645 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:28.645 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:28.903 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:28.903 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:28.903 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:29.161 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:29.161 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:29.161 05:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.696 05:58:19 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:31.696 00:36:31.696 real 0m49.986s 00:36:31.696 user 1m9.705s 00:36:31.696 sys 0m16.567s 00:36:31.696 05:58:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:31.696 05:58:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:31.696 ************************************ 00:36:31.696 END TEST nvmf_abort_qd_sizes 00:36:31.696 ************************************ 00:36:31.696 05:58:19 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:31.696 05:58:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:31.696 05:58:19 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:31.696 05:58:19 -- common/autotest_common.sh@10 -- # set +x 00:36:31.696 ************************************ 00:36:31.696 START TEST keyring_file 00:36:31.696 ************************************ 00:36:31.696 05:58:19 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:31.696 * Looking for test storage... 00:36:31.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:31.696 05:58:19 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:31.696 05:58:19 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:36:31.696 05:58:19 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:31.696 05:58:19 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:31.696 05:58:19 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:31.696 05:58:19 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:31.696 05:58:19 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:31.696 05:58:19 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:31.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.696 --rc genhtml_branch_coverage=1 00:36:31.696 --rc genhtml_function_coverage=1 00:36:31.696 --rc genhtml_legend=1 00:36:31.696 --rc geninfo_all_blocks=1 00:36:31.696 --rc geninfo_unexecuted_blocks=1 00:36:31.696 00:36:31.696 ' 00:36:31.696 05:58:19 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:31.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.696 --rc genhtml_branch_coverage=1 00:36:31.696 --rc genhtml_function_coverage=1 00:36:31.696 --rc genhtml_legend=1 00:36:31.696 --rc geninfo_all_blocks=1 00:36:31.696 --rc 
geninfo_unexecuted_blocks=1 00:36:31.696 00:36:31.696 ' 00:36:31.696 05:58:19 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:31.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.696 --rc genhtml_branch_coverage=1 00:36:31.696 --rc genhtml_function_coverage=1 00:36:31.696 --rc genhtml_legend=1 00:36:31.696 --rc geninfo_all_blocks=1 00:36:31.696 --rc geninfo_unexecuted_blocks=1 00:36:31.696 00:36:31.696 ' 00:36:31.696 05:58:19 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:31.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.697 --rc genhtml_branch_coverage=1 00:36:31.697 --rc genhtml_function_coverage=1 00:36:31.697 --rc genhtml_legend=1 00:36:31.697 --rc geninfo_all_blocks=1 00:36:31.697 --rc geninfo_unexecuted_blocks=1 00:36:31.697 00:36:31.697 ' 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:31.697 05:58:19 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:31.697 05:58:19 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:31.697 05:58:19 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:31.697 05:58:19 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:31.697 05:58:19 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:31.697 05:58:19 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.697 05:58:19 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.697 05:58:19 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.697 05:58:19 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:31.697 05:58:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:31.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Gb2znuJShs 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Gb2znuJShs 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Gb2znuJShs 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Gb2znuJShs 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uekWCfUPt7 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:31.697 05:58:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uekWCfUPt7 00:36:31.697 05:58:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uekWCfUPt7 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uekWCfUPt7 
00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=2044215 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2044215 00:36:31.697 05:58:19 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:31.697 05:58:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2044215 ']' 00:36:31.697 05:58:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.697 05:58:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:31.697 05:58:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:31.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.697 05:58:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:31.697 05:58:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:31.697 [2024-11-27 05:58:19.608270] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:36:31.697 [2024-11-27 05:58:19.608321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044215 ] 00:36:31.697 [2024-11-27 05:58:19.683012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.957 [2024-11-27 05:58:19.725586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:31.957 05:58:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:31.957 05:58:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:31.957 05:58:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:31.957 05:58:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.957 05:58:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:31.957 [2024-11-27 05:58:19.947541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.217 null0 00:36:32.217 [2024-11-27 05:58:19.979596] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:32.217 [2024-11-27 05:58:19.979805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:32.217 05:58:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.217 05:58:19 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:32.217 05:58:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:32.217 05:58:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:32.217 05:58:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:32.217 05:58:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:32.217 05:58:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:32.217 [2024-11-27 05:58:20.007663] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:32.217 request: 00:36:32.217 { 00:36:32.217 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:32.217 "secure_channel": false, 00:36:32.217 "listen_address": { 00:36:32.217 "trtype": "tcp", 00:36:32.217 "traddr": "127.0.0.1", 00:36:32.217 "trsvcid": "4420" 00:36:32.217 }, 00:36:32.217 "method": "nvmf_subsystem_add_listener", 00:36:32.217 "req_id": 1 00:36:32.217 } 00:36:32.217 Got JSON-RPC error response 00:36:32.217 response: 00:36:32.217 { 00:36:32.217 "code": -32602, 00:36:32.217 "message": "Invalid parameters" 00:36:32.217 } 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:32.217 05:58:20 keyring_file -- keyring/file.sh@47 -- # bperfpid=2044229 00:36:32.217 05:58:20 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:32.217 05:58:20 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2044229 /var/tmp/bperf.sock 00:36:32.217 05:58:20 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2044229 ']' 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:32.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:32.217 05:58:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:32.217 [2024-11-27 05:58:20.062168] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 00:36:32.217 [2024-11-27 05:58:20.062218] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044229 ] 00:36:32.217 [2024-11-27 05:58:20.133240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.217 [2024-11-27 05:58:20.175833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:32.476 05:58:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:32.476 05:58:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:32.476 05:58:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gb2znuJShs 00:36:32.476 05:58:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Gb2znuJShs 00:36:32.476 05:58:20 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uekWCfUPt7 00:36:32.476 05:58:20 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uekWCfUPt7 00:36:32.735 05:58:20 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:32.735 05:58:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:32.735 05:58:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:32.735 05:58:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:32.735 05:58:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:32.994 05:58:20 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Gb2znuJShs == \/\t\m\p\/\t\m\p\.\G\b\2\z\n\u\J\S\h\s ]] 00:36:32.994 05:58:20 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:32.994 05:58:20 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:32.994 05:58:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:32.994 05:58:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:32.994 05:58:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.253 05:58:21 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.uekWCfUPt7 == \/\t\m\p\/\t\m\p\.\u\e\k\W\C\f\U\P\t\7 ]] 00:36:33.253 05:58:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:33.253 05:58:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:33.253 05:58:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:33.253 05:58:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:33.253 05:58:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:33.253 05:58:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:36:33.253 05:58:21 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:33.253 05:58:21 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:33.253 05:58:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:33.253 05:58:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:33.253 05:58:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:33.253 05:58:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.253 05:58:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:33.513 05:58:21 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:33.513 05:58:21 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.513 05:58:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.772 [2024-11-27 05:58:21.580007] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:33.772 nvme0n1 00:36:33.772 05:58:21 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:33.772 05:58:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:33.772 05:58:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:33.772 05:58:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:33.772 05:58:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.772 05:58:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:36:34.031 05:58:21 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:34.031 05:58:21 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:34.031 05:58:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:34.031 05:58:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:34.031 05:58:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:34.031 05:58:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:34.031 05:58:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.290 05:58:22 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:34.290 05:58:22 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:34.290 Running I/O for 1 seconds... 00:36:35.228 19392.00 IOPS, 75.75 MiB/s 00:36:35.228 Latency(us) 00:36:35.228 [2024-11-27T04:58:23.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:35.228 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:35.228 nvme0n1 : 1.00 19435.35 75.92 0.00 0.00 6573.63 2715.06 11671.65 00:36:35.228 [2024-11-27T04:58:23.232Z] =================================================================================================================== 00:36:35.228 [2024-11-27T04:58:23.232Z] Total : 19435.35 75.92 0.00 0.00 6573.63 2715.06 11671.65 00:36:35.228 { 00:36:35.228 "results": [ 00:36:35.228 { 00:36:35.228 "job": "nvme0n1", 00:36:35.228 "core_mask": "0x2", 00:36:35.228 "workload": "randrw", 00:36:35.228 "percentage": 50, 00:36:35.228 "status": "finished", 00:36:35.228 "queue_depth": 128, 00:36:35.228 "io_size": 4096, 00:36:35.228 "runtime": 1.004407, 00:36:35.228 "iops": 19435.348419515198, 00:36:35.228 "mibps": 75.91932976373124, 00:36:35.228 
"io_failed": 0, 00:36:35.228 "io_timeout": 0, 00:36:35.228 "avg_latency_us": 6573.625328522885, 00:36:35.228 "min_latency_us": 2715.062857142857, 00:36:35.228 "max_latency_us": 11671.649523809523 00:36:35.228 } 00:36:35.228 ], 00:36:35.228 "core_count": 1 00:36:35.228 } 00:36:35.228 05:58:23 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:35.228 05:58:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:35.487 05:58:23 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:35.487 05:58:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:35.487 05:58:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.487 05:58:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.487 05:58:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.487 05:58:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.747 05:58:23 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:35.747 05:58:23 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:35.747 05:58:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:35.747 05:58:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.747 05:58:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.747 05:58:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:35.747 05:58:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.006 05:58:23 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:36.006 05:58:23 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:36.006 05:58:23 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:36.006 05:58:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:36.006 05:58:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:36.006 05:58:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:36.006 05:58:23 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:36.006 05:58:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:36.006 05:58:23 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:36.006 05:58:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:36.006 [2024-11-27 05:58:23.972136] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:36.006 [2024-11-27 05:58:23.972775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193b210 (107): Transport endpoint is not connected 00:36:36.006 [2024-11-27 05:58:23.973769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193b210 (9): Bad file descriptor 00:36:36.006 [2024-11-27 05:58:23.974771] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:36.006 [2024-11-27 05:58:23.974779] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:36.006 [2024-11-27 05:58:23.974787] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:36.006 [2024-11-27 05:58:23.974794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:36:36.006 request: 00:36:36.006 { 00:36:36.006 "name": "nvme0", 00:36:36.006 "trtype": "tcp", 00:36:36.006 "traddr": "127.0.0.1", 00:36:36.006 "adrfam": "ipv4", 00:36:36.006 "trsvcid": "4420", 00:36:36.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:36.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:36.006 "prchk_reftag": false, 00:36:36.006 "prchk_guard": false, 00:36:36.006 "hdgst": false, 00:36:36.006 "ddgst": false, 00:36:36.006 "psk": "key1", 00:36:36.006 "allow_unrecognized_csi": false, 00:36:36.006 "method": "bdev_nvme_attach_controller", 00:36:36.006 "req_id": 1 00:36:36.006 } 00:36:36.006 Got JSON-RPC error response 00:36:36.006 response: 00:36:36.006 { 00:36:36.006 "code": -5, 00:36:36.007 "message": "Input/output error" 00:36:36.007 } 00:36:36.007 05:58:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:36.007 05:58:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:36.007 05:58:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:36.007 05:58:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:36.007 05:58:23 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:36.007 05:58:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:36.007 05:58:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:36.007 05:58:23 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:36:36.007 05:58:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.007 05:58:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:36.265 05:58:24 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:36.265 05:58:24 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:36.265 05:58:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:36.266 05:58:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:36.266 05:58:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.266 05:58:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:36.266 05:58:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.525 05:58:24 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:36.525 05:58:24 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:36.525 05:58:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:36.784 05:58:24 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:36.784 05:58:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:36.784 05:58:24 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:36.784 05:58:24 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:36.784 05:58:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.044 05:58:24 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:36:37.044 05:58:24 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Gb2znuJShs 00:36:37.044 05:58:24 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gb2znuJShs 00:36:37.044 05:58:24 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:37.044 05:58:24 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gb2znuJShs 00:36:37.044 05:58:24 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:37.044 05:58:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:37.044 05:58:24 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:37.044 05:58:24 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:37.044 05:58:24 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gb2znuJShs 00:36:37.044 05:58:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Gb2znuJShs 00:36:37.303 [2024-11-27 05:58:25.153953] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Gb2znuJShs': 0100660 00:36:37.303 [2024-11-27 05:58:25.153983] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:37.303 request: 00:36:37.303 { 00:36:37.303 "name": "key0", 00:36:37.303 "path": "/tmp/tmp.Gb2znuJShs", 00:36:37.303 "method": "keyring_file_add_key", 00:36:37.303 "req_id": 1 00:36:37.303 } 00:36:37.303 Got JSON-RPC error response 00:36:37.303 response: 00:36:37.303 { 00:36:37.303 "code": -1, 00:36:37.303 "message": "Operation not permitted" 00:36:37.303 } 00:36:37.303 05:58:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:37.303 05:58:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:37.303 05:58:25 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:37.303 05:58:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:37.303 05:58:25 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Gb2znuJShs 00:36:37.304 05:58:25 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Gb2znuJShs 00:36:37.304 05:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Gb2znuJShs 00:36:37.563 05:58:25 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Gb2znuJShs 00:36:37.563 05:58:25 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:37.563 05:58:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:37.563 05:58:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:37.563 05:58:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:37.563 05:58:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:37.563 05:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.563 05:58:25 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:37.563 05:58:25 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:37.563 05:58:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:37.563 05:58:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:37.563 05:58:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:37.563 05:58:25 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:37.563 05:58:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:37.563 05:58:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:37.563 05:58:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:37.563 05:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:37.822 [2024-11-27 05:58:25.719446] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Gb2znuJShs': No such file or directory 00:36:37.822 [2024-11-27 05:58:25.719465] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:37.822 [2024-11-27 05:58:25.719480] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:37.822 [2024-11-27 05:58:25.719491] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:37.822 [2024-11-27 05:58:25.719498] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:37.822 [2024-11-27 05:58:25.719503] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:37.822 request: 00:36:37.822 { 00:36:37.822 "name": "nvme0", 00:36:37.822 "trtype": "tcp", 00:36:37.822 "traddr": "127.0.0.1", 00:36:37.822 "adrfam": "ipv4", 00:36:37.822 "trsvcid": "4420", 00:36:37.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:37.822 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:36:37.822 "prchk_reftag": false, 00:36:37.822 "prchk_guard": false, 00:36:37.822 "hdgst": false, 00:36:37.822 "ddgst": false, 00:36:37.822 "psk": "key0", 00:36:37.822 "allow_unrecognized_csi": false, 00:36:37.822 "method": "bdev_nvme_attach_controller", 00:36:37.822 "req_id": 1 00:36:37.822 } 00:36:37.822 Got JSON-RPC error response 00:36:37.822 response: 00:36:37.822 { 00:36:37.822 "code": -19, 00:36:37.822 "message": "No such device" 00:36:37.822 } 00:36:37.822 05:58:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:37.822 05:58:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:37.822 05:58:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:37.822 05:58:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:37.822 05:58:25 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:37.822 05:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:38.082 05:58:25 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:38.082 05:58:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:38.082 05:58:25 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:38.082 05:58:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:38.082 05:58:25 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:38.082 05:58:25 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:38.082 05:58:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BKSsVimRRj 00:36:38.082 05:58:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:38.082 05:58:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:38.082 05:58:25 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:36:38.082 05:58:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:38.082 05:58:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:38.082 05:58:25 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:38.082 05:58:25 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:38.082 05:58:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BKSsVimRRj 00:36:38.082 05:58:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BKSsVimRRj 00:36:38.082 05:58:25 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.BKSsVimRRj 00:36:38.082 05:58:25 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BKSsVimRRj 00:36:38.082 05:58:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BKSsVimRRj 00:36:38.342 05:58:26 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:38.343 05:58:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:38.602 nvme0n1 00:36:38.602 05:58:26 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:38.602 05:58:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:38.602 05:58:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.602 05:58:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.602 05:58:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:38.602 05:58:26 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.861 05:58:26 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:38.861 05:58:26 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:38.861 05:58:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:38.861 05:58:26 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:38.861 05:58:26 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:38.861 05:58:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.861 05:58:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:38.861 05:58:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.121 05:58:26 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:39.121 05:58:26 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:39.121 05:58:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:39.121 05:58:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:39.121 05:58:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:39.121 05:58:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:39.121 05:58:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.380 05:58:27 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:39.380 05:58:27 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:39.380 05:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:36:39.640 05:58:27 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:39.640 05:58:27 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:39.640 05:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.640 05:58:27 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:39.640 05:58:27 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BKSsVimRRj 00:36:39.640 05:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BKSsVimRRj 00:36:39.900 05:58:27 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uekWCfUPt7 00:36:39.900 05:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uekWCfUPt7 00:36:40.158 05:58:27 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.158 05:58:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.417 nvme0n1 00:36:40.417 05:58:28 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:40.417 05:58:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:40.676 05:58:28 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:40.676 "subsystems": [ 00:36:40.676 { 00:36:40.676 "subsystem": 
"keyring", 00:36:40.676 "config": [ 00:36:40.676 { 00:36:40.676 "method": "keyring_file_add_key", 00:36:40.676 "params": { 00:36:40.676 "name": "key0", 00:36:40.676 "path": "/tmp/tmp.BKSsVimRRj" 00:36:40.676 } 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "method": "keyring_file_add_key", 00:36:40.676 "params": { 00:36:40.676 "name": "key1", 00:36:40.676 "path": "/tmp/tmp.uekWCfUPt7" 00:36:40.676 } 00:36:40.676 } 00:36:40.676 ] 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "subsystem": "iobuf", 00:36:40.676 "config": [ 00:36:40.676 { 00:36:40.676 "method": "iobuf_set_options", 00:36:40.676 "params": { 00:36:40.676 "small_pool_count": 8192, 00:36:40.676 "large_pool_count": 1024, 00:36:40.676 "small_bufsize": 8192, 00:36:40.676 "large_bufsize": 135168, 00:36:40.676 "enable_numa": false 00:36:40.676 } 00:36:40.676 } 00:36:40.676 ] 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "subsystem": "sock", 00:36:40.676 "config": [ 00:36:40.676 { 00:36:40.676 "method": "sock_set_default_impl", 00:36:40.676 "params": { 00:36:40.676 "impl_name": "posix" 00:36:40.676 } 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "method": "sock_impl_set_options", 00:36:40.676 "params": { 00:36:40.676 "impl_name": "ssl", 00:36:40.676 "recv_buf_size": 4096, 00:36:40.676 "send_buf_size": 4096, 00:36:40.676 "enable_recv_pipe": true, 00:36:40.676 "enable_quickack": false, 00:36:40.676 "enable_placement_id": 0, 00:36:40.676 "enable_zerocopy_send_server": true, 00:36:40.676 "enable_zerocopy_send_client": false, 00:36:40.676 "zerocopy_threshold": 0, 00:36:40.676 "tls_version": 0, 00:36:40.676 "enable_ktls": false 00:36:40.676 } 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "method": "sock_impl_set_options", 00:36:40.676 "params": { 00:36:40.676 "impl_name": "posix", 00:36:40.676 "recv_buf_size": 2097152, 00:36:40.676 "send_buf_size": 2097152, 00:36:40.676 "enable_recv_pipe": true, 00:36:40.676 "enable_quickack": false, 00:36:40.676 "enable_placement_id": 0, 00:36:40.676 "enable_zerocopy_send_server": true, 
00:36:40.676 "enable_zerocopy_send_client": false, 00:36:40.676 "zerocopy_threshold": 0, 00:36:40.676 "tls_version": 0, 00:36:40.676 "enable_ktls": false 00:36:40.676 } 00:36:40.676 } 00:36:40.676 ] 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "subsystem": "vmd", 00:36:40.676 "config": [] 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "subsystem": "accel", 00:36:40.676 "config": [ 00:36:40.676 { 00:36:40.676 "method": "accel_set_options", 00:36:40.676 "params": { 00:36:40.676 "small_cache_size": 128, 00:36:40.676 "large_cache_size": 16, 00:36:40.676 "task_count": 2048, 00:36:40.676 "sequence_count": 2048, 00:36:40.676 "buf_count": 2048 00:36:40.676 } 00:36:40.676 } 00:36:40.676 ] 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "subsystem": "bdev", 00:36:40.676 "config": [ 00:36:40.676 { 00:36:40.676 "method": "bdev_set_options", 00:36:40.676 "params": { 00:36:40.676 "bdev_io_pool_size": 65535, 00:36:40.676 "bdev_io_cache_size": 256, 00:36:40.676 "bdev_auto_examine": true, 00:36:40.676 "iobuf_small_cache_size": 128, 00:36:40.676 "iobuf_large_cache_size": 16 00:36:40.676 } 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "method": "bdev_raid_set_options", 00:36:40.676 "params": { 00:36:40.676 "process_window_size_kb": 1024, 00:36:40.676 "process_max_bandwidth_mb_sec": 0 00:36:40.676 } 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "method": "bdev_iscsi_set_options", 00:36:40.676 "params": { 00:36:40.676 "timeout_sec": 30 00:36:40.676 } 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "method": "bdev_nvme_set_options", 00:36:40.676 "params": { 00:36:40.676 "action_on_timeout": "none", 00:36:40.676 "timeout_us": 0, 00:36:40.676 "timeout_admin_us": 0, 00:36:40.676 "keep_alive_timeout_ms": 10000, 00:36:40.676 "arbitration_burst": 0, 00:36:40.676 "low_priority_weight": 0, 00:36:40.676 "medium_priority_weight": 0, 00:36:40.676 "high_priority_weight": 0, 00:36:40.676 "nvme_adminq_poll_period_us": 10000, 00:36:40.676 "nvme_ioq_poll_period_us": 0, 00:36:40.676 "io_queue_requests": 512, 
00:36:40.676 "delay_cmd_submit": true, 00:36:40.676 "transport_retry_count": 4, 00:36:40.676 "bdev_retry_count": 3, 00:36:40.676 "transport_ack_timeout": 0, 00:36:40.676 "ctrlr_loss_timeout_sec": 0, 00:36:40.676 "reconnect_delay_sec": 0, 00:36:40.676 "fast_io_fail_timeout_sec": 0, 00:36:40.676 "disable_auto_failback": false, 00:36:40.676 "generate_uuids": false, 00:36:40.676 "transport_tos": 0, 00:36:40.676 "nvme_error_stat": false, 00:36:40.676 "rdma_srq_size": 0, 00:36:40.676 "io_path_stat": false, 00:36:40.676 "allow_accel_sequence": false, 00:36:40.676 "rdma_max_cq_size": 0, 00:36:40.676 "rdma_cm_event_timeout_ms": 0, 00:36:40.676 "dhchap_digests": [ 00:36:40.676 "sha256", 00:36:40.676 "sha384", 00:36:40.676 "sha512" 00:36:40.676 ], 00:36:40.676 "dhchap_dhgroups": [ 00:36:40.676 "null", 00:36:40.676 "ffdhe2048", 00:36:40.676 "ffdhe3072", 00:36:40.676 "ffdhe4096", 00:36:40.676 "ffdhe6144", 00:36:40.676 "ffdhe8192" 00:36:40.676 ] 00:36:40.676 } 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "method": "bdev_nvme_attach_controller", 00:36:40.676 "params": { 00:36:40.676 "name": "nvme0", 00:36:40.676 "trtype": "TCP", 00:36:40.676 "adrfam": "IPv4", 00:36:40.676 "traddr": "127.0.0.1", 00:36:40.676 "trsvcid": "4420", 00:36:40.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.676 "prchk_reftag": false, 00:36:40.676 "prchk_guard": false, 00:36:40.676 "ctrlr_loss_timeout_sec": 0, 00:36:40.676 "reconnect_delay_sec": 0, 00:36:40.676 "fast_io_fail_timeout_sec": 0, 00:36:40.676 "psk": "key0", 00:36:40.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.676 "hdgst": false, 00:36:40.676 "ddgst": false, 00:36:40.676 "multipath": "multipath" 00:36:40.676 } 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "method": "bdev_nvme_set_hotplug", 00:36:40.676 "params": { 00:36:40.676 "period_us": 100000, 00:36:40.676 "enable": false 00:36:40.676 } 00:36:40.676 }, 00:36:40.676 { 00:36:40.676 "method": "bdev_wait_for_examine" 00:36:40.676 } 00:36:40.676 ] 00:36:40.676 }, 00:36:40.676 { 
00:36:40.676 "subsystem": "nbd", 00:36:40.676 "config": [] 00:36:40.676 } 00:36:40.676 ] 00:36:40.676 }' 00:36:40.676 05:58:28 keyring_file -- keyring/file.sh@115 -- # killprocess 2044229 00:36:40.677 05:58:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2044229 ']' 00:36:40.677 05:58:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2044229 00:36:40.677 05:58:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:40.677 05:58:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:40.677 05:58:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2044229 00:36:40.677 05:58:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:40.677 05:58:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:40.677 05:58:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2044229' 00:36:40.677 killing process with pid 2044229 00:36:40.677 05:58:28 keyring_file -- common/autotest_common.sh@973 -- # kill 2044229 00:36:40.677 Received shutdown signal, test time was about 1.000000 seconds 00:36:40.677 00:36:40.677 Latency(us) 00:36:40.677 [2024-11-27T04:58:28.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:40.677 [2024-11-27T04:58:28.681Z] =================================================================================================================== 00:36:40.677 [2024-11-27T04:58:28.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:40.677 05:58:28 keyring_file -- common/autotest_common.sh@978 -- # wait 2044229 00:36:40.936 05:58:28 keyring_file -- keyring/file.sh@118 -- # bperfpid=2045741 00:36:40.936 05:58:28 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2045741 /var/tmp/bperf.sock 00:36:40.936 05:58:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2045741 ']' 00:36:40.936 05:58:28 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:36:40.936 05:58:28 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:40.936 05:58:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:40.936 05:58:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:40.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:40.936 05:58:28 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:40.936 "subsystems": [ 00:36:40.936 { 00:36:40.936 "subsystem": "keyring", 00:36:40.936 "config": [ 00:36:40.936 { 00:36:40.936 "method": "keyring_file_add_key", 00:36:40.936 "params": { 00:36:40.936 "name": "key0", 00:36:40.936 "path": "/tmp/tmp.BKSsVimRRj" 00:36:40.936 } 00:36:40.936 }, 00:36:40.936 { 00:36:40.936 "method": "keyring_file_add_key", 00:36:40.936 "params": { 00:36:40.936 "name": "key1", 00:36:40.936 "path": "/tmp/tmp.uekWCfUPt7" 00:36:40.936 } 00:36:40.936 } 00:36:40.936 ] 00:36:40.936 }, 00:36:40.936 { 00:36:40.936 "subsystem": "iobuf", 00:36:40.936 "config": [ 00:36:40.936 { 00:36:40.936 "method": "iobuf_set_options", 00:36:40.936 "params": { 00:36:40.936 "small_pool_count": 8192, 00:36:40.936 "large_pool_count": 1024, 00:36:40.936 "small_bufsize": 8192, 00:36:40.936 "large_bufsize": 135168, 00:36:40.936 "enable_numa": false 00:36:40.936 } 00:36:40.936 } 00:36:40.936 ] 00:36:40.936 }, 00:36:40.936 { 00:36:40.936 "subsystem": "sock", 00:36:40.936 "config": [ 00:36:40.936 { 00:36:40.937 "method": "sock_set_default_impl", 00:36:40.937 "params": { 00:36:40.937 "impl_name": "posix" 00:36:40.937 } 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "method": "sock_impl_set_options", 00:36:40.937 "params": { 00:36:40.937 "impl_name": "ssl", 00:36:40.937 "recv_buf_size": 4096, 00:36:40.937 
"send_buf_size": 4096, 00:36:40.937 "enable_recv_pipe": true, 00:36:40.937 "enable_quickack": false, 00:36:40.937 "enable_placement_id": 0, 00:36:40.937 "enable_zerocopy_send_server": true, 00:36:40.937 "enable_zerocopy_send_client": false, 00:36:40.937 "zerocopy_threshold": 0, 00:36:40.937 "tls_version": 0, 00:36:40.937 "enable_ktls": false 00:36:40.937 } 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "method": "sock_impl_set_options", 00:36:40.937 "params": { 00:36:40.937 "impl_name": "posix", 00:36:40.937 "recv_buf_size": 2097152, 00:36:40.937 "send_buf_size": 2097152, 00:36:40.937 "enable_recv_pipe": true, 00:36:40.937 "enable_quickack": false, 00:36:40.937 "enable_placement_id": 0, 00:36:40.937 "enable_zerocopy_send_server": true, 00:36:40.937 "enable_zerocopy_send_client": false, 00:36:40.937 "zerocopy_threshold": 0, 00:36:40.937 "tls_version": 0, 00:36:40.937 "enable_ktls": false 00:36:40.937 } 00:36:40.937 } 00:36:40.937 ] 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "subsystem": "vmd", 00:36:40.937 "config": [] 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "subsystem": "accel", 00:36:40.937 "config": [ 00:36:40.937 { 00:36:40.937 "method": "accel_set_options", 00:36:40.937 "params": { 00:36:40.937 "small_cache_size": 128, 00:36:40.937 "large_cache_size": 16, 00:36:40.937 "task_count": 2048, 00:36:40.937 "sequence_count": 2048, 00:36:40.937 "buf_count": 2048 00:36:40.937 } 00:36:40.937 } 00:36:40.937 ] 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "subsystem": "bdev", 00:36:40.937 "config": [ 00:36:40.937 { 00:36:40.937 "method": "bdev_set_options", 00:36:40.937 "params": { 00:36:40.937 "bdev_io_pool_size": 65535, 00:36:40.937 "bdev_io_cache_size": 256, 00:36:40.937 "bdev_auto_examine": true, 00:36:40.937 "iobuf_small_cache_size": 128, 00:36:40.937 "iobuf_large_cache_size": 16 00:36:40.937 } 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "method": "bdev_raid_set_options", 00:36:40.937 "params": { 00:36:40.937 "process_window_size_kb": 1024, 00:36:40.937 
"process_max_bandwidth_mb_sec": 0 00:36:40.937 } 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "method": "bdev_iscsi_set_options", 00:36:40.937 "params": { 00:36:40.937 "timeout_sec": 30 00:36:40.937 } 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "method": "bdev_nvme_set_options", 00:36:40.937 "params": { 00:36:40.937 "action_on_timeout": "none", 00:36:40.937 "timeout_us": 0, 00:36:40.937 "timeout_admin_us": 0, 00:36:40.937 "keep_alive_timeout_ms": 10000, 00:36:40.937 "arbitration_burst": 0, 00:36:40.937 "low_priority_weight": 0, 00:36:40.937 "medium_priority_weight": 0, 00:36:40.937 "high_priority_weight": 0, 00:36:40.937 "nvme_adminq_poll_period_us": 10000, 00:36:40.937 "nvme_ioq_poll_period_us": 0, 00:36:40.937 "io_queue_requests": 512, 00:36:40.937 "delay_cmd_submit": true, 00:36:40.937 "transport_retry_count": 4, 00:36:40.937 "bdev_retry_count": 3, 00:36:40.937 "transport_ack_timeout": 0, 00:36:40.937 "ctrlr_loss_timeout_sec": 0, 00:36:40.937 "reconnect_delay_sec": 0, 00:36:40.937 "fast_io_fail_timeout_sec": 0, 00:36:40.937 "disable_auto_failback": false, 00:36:40.937 "generate_uuids": false, 00:36:40.937 "transport_tos": 0, 00:36:40.937 "nvme_error_stat": false, 00:36:40.937 "rdma_srq_size": 0, 00:36:40.937 "io_path_stat": false, 00:36:40.937 "allow_accel_sequence": false, 00:36:40.937 "rdma_max_cq_size": 0, 00:36:40.937 "rdma_cm_event_timeout_ms": 0, 00:36:40.937 "dhchap_digests": [ 00:36:40.937 "sha256", 00:36:40.937 "sha384", 00:36:40.937 "sha512" 00:36:40.937 ], 00:36:40.937 "dhchap_dhgroups": [ 00:36:40.937 "null", 00:36:40.937 "ffdhe2048", 00:36:40.937 "ffdhe3072", 00:36:40.937 "ffdhe4096", 00:36:40.937 "ffdhe6144", 00:36:40.937 "ffdhe8192" 00:36:40.937 ] 00:36:40.937 } 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "method": "bdev_nvme_attach_controller", 00:36:40.937 "params": { 00:36:40.937 "name": "nvme0", 00:36:40.937 "trtype": "TCP", 00:36:40.937 "adrfam": "IPv4", 00:36:40.937 "traddr": "127.0.0.1", 00:36:40.937 "trsvcid": "4420", 00:36:40.937 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:36:40.937 "prchk_reftag": false, 00:36:40.937 "prchk_guard": false, 00:36:40.937 "ctrlr_loss_timeout_sec": 0, 00:36:40.937 "reconnect_delay_sec": 0, 00:36:40.937 "fast_io_fail_timeout_sec": 0, 00:36:40.937 "psk": "key0", 00:36:40.937 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.937 "hdgst": false, 00:36:40.937 "ddgst": false, 00:36:40.937 "multipath": "multipath" 00:36:40.937 } 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "method": "bdev_nvme_set_hotplug", 00:36:40.937 "params": { 00:36:40.937 "period_us": 100000, 00:36:40.937 "enable": false 00:36:40.937 } 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "method": "bdev_wait_for_examine" 00:36:40.937 } 00:36:40.937 ] 00:36:40.937 }, 00:36:40.937 { 00:36:40.937 "subsystem": "nbd", 00:36:40.937 "config": [] 00:36:40.937 } 00:36:40.937 ] 00:36:40.937 }' 00:36:40.937 05:58:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:40.937 05:58:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:40.937 [2024-11-27 05:58:28.739465] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:36:40.937 [2024-11-27 05:58:28.739515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045741 ] 00:36:40.937 [2024-11-27 05:58:28.813220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.937 [2024-11-27 05:58:28.853161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.252 [2024-11-27 05:58:29.014741] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:41.914 05:58:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:41.914 05:58:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:41.914 05:58:29 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:41.914 05:58:29 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:41.914 05:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.914 05:58:29 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:41.914 05:58:29 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:41.914 05:58:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:41.914 05:58:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.914 05:58:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.914 05:58:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.914 05:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.182 05:58:29 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:42.182 05:58:29 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:42.182 05:58:29 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:42.182 05:58:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.182 05:58:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.182 05:58:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:42.182 05:58:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.182 05:58:30 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:42.182 05:58:30 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:42.182 05:58:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:42.182 05:58:30 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:42.441 05:58:30 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:42.441 05:58:30 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:42.441 05:58:30 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.BKSsVimRRj /tmp/tmp.uekWCfUPt7 00:36:42.441 05:58:30 keyring_file -- keyring/file.sh@20 -- # killprocess 2045741 00:36:42.441 05:58:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2045741 ']' 00:36:42.441 05:58:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2045741 00:36:42.441 05:58:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:42.441 05:58:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:42.441 05:58:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2045741 00:36:42.441 05:58:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:42.441 05:58:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:42.441 05:58:30 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2045741' 00:36:42.441 killing process with pid 2045741 00:36:42.441 05:58:30 keyring_file -- common/autotest_common.sh@973 -- # kill 2045741 00:36:42.441 Received shutdown signal, test time was about 1.000000 seconds 00:36:42.441 00:36:42.441 Latency(us) 00:36:42.441 [2024-11-27T04:58:30.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.441 [2024-11-27T04:58:30.445Z] =================================================================================================================== 00:36:42.441 [2024-11-27T04:58:30.445Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:42.441 05:58:30 keyring_file -- common/autotest_common.sh@978 -- # wait 2045741 00:36:42.700 05:58:30 keyring_file -- keyring/file.sh@21 -- # killprocess 2044215 00:36:42.700 05:58:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2044215 ']' 00:36:42.700 05:58:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2044215 00:36:42.700 05:58:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:42.700 05:58:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:42.700 05:58:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2044215 00:36:42.700 05:58:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:42.700 05:58:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:42.700 05:58:30 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2044215' 00:36:42.700 killing process with pid 2044215 00:36:42.700 05:58:30 keyring_file -- common/autotest_common.sh@973 -- # kill 2044215 00:36:42.700 05:58:30 keyring_file -- common/autotest_common.sh@978 -- # wait 2044215 00:36:42.959 00:36:42.959 real 0m11.679s 00:36:42.959 user 0m28.973s 00:36:42.959 sys 0m2.744s 00:36:42.959 05:58:30 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:36:42.959 05:58:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:42.959 ************************************ 00:36:42.959 END TEST keyring_file 00:36:42.959 ************************************ 00:36:43.219 05:58:30 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:43.219 05:58:30 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:43.219 05:58:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:43.219 05:58:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:43.219 05:58:30 -- common/autotest_common.sh@10 -- # set +x 00:36:43.219 ************************************ 00:36:43.219 START TEST keyring_linux 00:36:43.219 ************************************ 00:36:43.220 05:58:30 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:43.220 Joined session keyring: 84514080 00:36:43.220 * Looking for test storage... 
00:36:43.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:43.220 05:58:31 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:43.220 05:58:31 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:36:43.220 05:58:31 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:43.220 05:58:31 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:43.220 05:58:31 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:43.220 05:58:31 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:43.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.220 --rc genhtml_branch_coverage=1 00:36:43.220 --rc genhtml_function_coverage=1 00:36:43.220 --rc genhtml_legend=1 00:36:43.220 --rc geninfo_all_blocks=1 00:36:43.220 --rc geninfo_unexecuted_blocks=1 00:36:43.220 00:36:43.220 ' 00:36:43.220 05:58:31 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:43.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.220 --rc genhtml_branch_coverage=1 00:36:43.220 --rc genhtml_function_coverage=1 00:36:43.220 --rc genhtml_legend=1 00:36:43.220 --rc geninfo_all_blocks=1 00:36:43.220 --rc geninfo_unexecuted_blocks=1 00:36:43.220 00:36:43.220 ' 
00:36:43.220 05:58:31 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:43.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.220 --rc genhtml_branch_coverage=1 00:36:43.220 --rc genhtml_function_coverage=1 00:36:43.220 --rc genhtml_legend=1 00:36:43.220 --rc geninfo_all_blocks=1 00:36:43.220 --rc geninfo_unexecuted_blocks=1 00:36:43.220 00:36:43.220 ' 00:36:43.220 05:58:31 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:43.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:43.220 --rc genhtml_branch_coverage=1 00:36:43.220 --rc genhtml_function_coverage=1 00:36:43.220 --rc genhtml_legend=1 00:36:43.220 --rc geninfo_all_blocks=1 00:36:43.220 --rc geninfo_unexecuted_blocks=1 00:36:43.220 00:36:43.220 ' 00:36:43.220 05:58:31 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:43.220 05:58:31 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:43.220 05:58:31 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:43.220 05:58:31 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.220 05:58:31 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.220 05:58:31 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.220 05:58:31 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:43.220 05:58:31 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:43.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:43.220 05:58:31 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:43.220 05:58:31 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:43.220 05:58:31 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:43.220 05:58:31 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:43.220 05:58:31 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:43.220 05:58:31 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:43.220 05:58:31 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:43.220 05:58:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:43.220 05:58:31 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:43.220 05:58:31 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:43.220 05:58:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:43.220 05:58:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:43.220 05:58:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:43.220 05:58:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:43.481 05:58:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:43.481 05:58:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:43.481 /tmp/:spdk-test:key0 00:36:43.481 05:58:31 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:43.481 05:58:31 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:43.481 05:58:31 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:43.481 05:58:31 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:43.481 05:58:31 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:43.481 05:58:31 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:43.481 05:58:31 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:43.481 05:58:31 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:43.481 05:58:31 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:43.481 05:58:31 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:43.481 05:58:31 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:43.481 05:58:31 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:43.481 05:58:31 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:43.481 05:58:31 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:43.481 05:58:31 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:43.481 /tmp/:spdk-test:key1 00:36:43.481 05:58:31 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2046305 00:36:43.481 05:58:31 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2046305 00:36:43.481 05:58:31 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:43.481 05:58:31 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2046305 ']' 00:36:43.481 05:58:31 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:43.481 05:58:31 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:43.481 05:58:31 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:43.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:43.481 05:58:31 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:43.481 05:58:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:43.481 [2024-11-27 05:58:31.339642] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:36:43.481 [2024-11-27 05:58:31.339697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046305 ] 00:36:43.481 [2024-11-27 05:58:31.399346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.481 [2024-11-27 05:58:31.442117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.740 05:58:31 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:43.740 05:58:31 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:43.740 05:58:31 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:43.741 05:58:31 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.741 05:58:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:43.741 [2024-11-27 05:58:31.659510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:43.741 null0 00:36:43.741 [2024-11-27 05:58:31.691561] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:43.741 [2024-11-27 05:58:31.691935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:43.741 05:58:31 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.741 05:58:31 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:43.741 446895869 00:36:43.741 05:58:31 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:43.741 470857892 00:36:43.741 05:58:31 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2046314 00:36:43.741 05:58:31 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2046314 /var/tmp/bperf.sock 00:36:43.741 05:58:31 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:43.741 05:58:31 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2046314 ']' 00:36:43.741 05:58:31 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:43.741 05:58:31 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:43.741 05:58:31 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:43.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:43.741 05:58:31 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:43.741 05:58:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:44.000 [2024-11-27 05:58:31.762212] Starting SPDK v25.01-pre git sha1 a640d9f98 / DPDK 24.03.0 initialization... 
00:36:44.000 [2024-11-27 05:58:31.762255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046314 ] 00:36:44.000 [2024-11-27 05:58:31.834514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.000 [2024-11-27 05:58:31.874634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.000 05:58:31 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:44.000 05:58:31 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:44.000 05:58:31 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:44.000 05:58:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:44.259 05:58:32 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:44.259 05:58:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:44.518 05:58:32 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:44.518 05:58:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:44.778 [2024-11-27 05:58:32.520633] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:44.778 nvme0n1 00:36:44.778 05:58:32 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:36:44.778 05:58:32 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:44.778 05:58:32 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:44.778 05:58:32 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:44.778 05:58:32 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:44.778 05:58:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.037 05:58:32 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:45.037 05:58:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:45.037 05:58:32 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:45.037 05:58:32 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:45.037 05:58:32 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.037 05:58:32 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:45.037 05:58:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.037 05:58:32 keyring_linux -- keyring/linux.sh@25 -- # sn=446895869 00:36:45.037 05:58:32 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:45.037 05:58:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:45.037 05:58:33 keyring_linux -- keyring/linux.sh@26 -- # [[ 446895869 == \4\4\6\8\9\5\8\6\9 ]] 00:36:45.037 05:58:33 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 446895869 00:36:45.037 05:58:33 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:45.037 05:58:33 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:45.297 Running I/O for 1 seconds... 00:36:46.235 21669.00 IOPS, 84.64 MiB/s 00:36:46.235 Latency(us) 00:36:46.235 [2024-11-27T04:58:34.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:46.235 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:46.235 nvme0n1 : 1.01 21669.35 84.65 0.00 0.00 5887.83 3417.23 8426.06 00:36:46.235 [2024-11-27T04:58:34.239Z] =================================================================================================================== 00:36:46.235 [2024-11-27T04:58:34.239Z] Total : 21669.35 84.65 0.00 0.00 5887.83 3417.23 8426.06 00:36:46.235 { 00:36:46.235 "results": [ 00:36:46.235 { 00:36:46.235 "job": "nvme0n1", 00:36:46.235 "core_mask": "0x2", 00:36:46.235 "workload": "randread", 00:36:46.235 "status": "finished", 00:36:46.235 "queue_depth": 128, 00:36:46.235 "io_size": 4096, 00:36:46.235 "runtime": 1.005891, 00:36:46.235 "iops": 21669.345883400885, 00:36:46.235 "mibps": 84.64588235703471, 00:36:46.235 "io_failed": 0, 00:36:46.235 "io_timeout": 0, 00:36:46.235 "avg_latency_us": 5887.828750570743, 00:36:46.235 "min_latency_us": 3417.2342857142858, 00:36:46.235 "max_latency_us": 8426.057142857142 00:36:46.235 } 00:36:46.235 ], 00:36:46.235 "core_count": 1 00:36:46.235 } 00:36:46.235 05:58:34 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:46.235 05:58:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:46.494 05:58:34 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:46.494 05:58:34 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:46.494 05:58:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:46.494 05:58:34 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:46.494 05:58:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:46.494 05:58:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.753 05:58:34 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:46.753 05:58:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:46.753 05:58:34 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:46.753 05:58:34 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:46.753 05:58:34 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:46.753 05:58:34 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:46.753 05:58:34 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:46.753 05:58:34 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:46.753 05:58:34 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:46.753 05:58:34 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:46.753 05:58:34 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:46.753 05:58:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:46.753 [2024-11-27 05:58:34.680388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:46.754 [2024-11-27 05:58:34.681121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb6fa0 (107): Transport endpoint is not connected 00:36:46.754 [2024-11-27 05:58:34.682117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb6fa0 (9): Bad file descriptor 00:36:46.754 [2024-11-27 05:58:34.683118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:46.754 [2024-11-27 05:58:34.683132] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:46.754 [2024-11-27 05:58:34.683139] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:46.754 [2024-11-27 05:58:34.683147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:46.754 request: 00:36:46.754 { 00:36:46.754 "name": "nvme0", 00:36:46.754 "trtype": "tcp", 00:36:46.754 "traddr": "127.0.0.1", 00:36:46.754 "adrfam": "ipv4", 00:36:46.754 "trsvcid": "4420", 00:36:46.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:46.754 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:46.754 "prchk_reftag": false, 00:36:46.754 "prchk_guard": false, 00:36:46.754 "hdgst": false, 00:36:46.754 "ddgst": false, 00:36:46.754 "psk": ":spdk-test:key1", 00:36:46.754 "allow_unrecognized_csi": false, 00:36:46.754 "method": "bdev_nvme_attach_controller", 00:36:46.754 "req_id": 1 00:36:46.754 } 00:36:46.754 Got JSON-RPC error response 00:36:46.754 response: 00:36:46.754 { 00:36:46.754 "code": -5, 00:36:46.754 "message": "Input/output error" 00:36:46.754 } 00:36:46.754 05:58:34 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:46.754 05:58:34 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:46.754 05:58:34 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:46.754 05:58:34 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@33 -- # sn=446895869 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 446895869 00:36:46.754 1 links removed 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:46.754 
05:58:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@33 -- # sn=470857892 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 470857892 00:36:46.754 1 links removed 00:36:46.754 05:58:34 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2046314 00:36:46.754 05:58:34 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2046314 ']' 00:36:46.754 05:58:34 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2046314 00:36:46.754 05:58:34 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:46.754 05:58:34 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:46.754 05:58:34 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2046314 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2046314' 00:36:47.014 killing process with pid 2046314 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@973 -- # kill 2046314 00:36:47.014 Received shutdown signal, test time was about 1.000000 seconds 00:36:47.014 00:36:47.014 Latency(us) 00:36:47.014 [2024-11-27T04:58:35.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:47.014 [2024-11-27T04:58:35.018Z] =================================================================================================================== 00:36:47.014 [2024-11-27T04:58:35.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@978 -- # wait 2046314 
00:36:47.014 05:58:34 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2046305 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2046305 ']' 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2046305 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2046305 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2046305' 00:36:47.014 killing process with pid 2046305 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@973 -- # kill 2046305 00:36:47.014 05:58:34 keyring_linux -- common/autotest_common.sh@978 -- # wait 2046305 00:36:47.582 00:36:47.582 real 0m4.298s 00:36:47.582 user 0m8.106s 00:36:47.582 sys 0m1.454s 00:36:47.582 05:58:35 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.582 05:58:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:47.582 ************************************ 00:36:47.582 END TEST keyring_linux 00:36:47.582 ************************************ 00:36:47.582 05:58:35 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:47.582 05:58:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:47.582 05:58:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:47.583 05:58:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:47.583 05:58:35 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:47.583 05:58:35 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:47.583 05:58:35 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:47.583 05:58:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:47.583 05:58:35 -- common/autotest_common.sh@10 -- # set +x 00:36:47.583 05:58:35 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:47.583 05:58:35 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:47.583 05:58:35 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:47.583 05:58:35 -- common/autotest_common.sh@10 -- # set +x 00:36:52.861 INFO: APP EXITING 00:36:52.861 INFO: killing all VMs 00:36:52.861 INFO: killing vhost app 00:36:52.861 INFO: EXIT DONE 00:36:55.400 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:55.400 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:55.400 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:55.400 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:58.706 Cleaning 00:36:58.706 Removing: /var/run/dpdk/spdk0/config 00:36:58.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:58.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:58.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:58.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:58.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:58.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:58.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:58.706 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:58.706 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:58.706 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:58.706 Removing: /var/run/dpdk/spdk1/config 00:36:58.706 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:58.707 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:58.707 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:58.707 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:58.707 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:58.707 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:58.707 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:58.707 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:58.707 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:58.707 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:58.707 Removing: /var/run/dpdk/spdk2/config 00:36:58.707 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:58.707 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:58.707 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:58.707 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:58.707 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:58.707 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:58.707 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:58.707 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:58.707 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:58.707 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:58.707 Removing: /var/run/dpdk/spdk3/config 00:36:58.707 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:58.707 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:58.707 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:58.707 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:58.707 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:58.707 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:58.707 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:58.707 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:58.707 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:58.707 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:58.707 Removing: /var/run/dpdk/spdk4/config 00:36:58.707 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:58.707 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:58.707 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:58.707 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:58.707 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:58.707 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:58.707 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:58.707 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:58.707 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:58.707 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:36:58.707 Removing: /dev/shm/bdev_svc_trace.1 00:36:58.707 Removing: /dev/shm/nvmf_trace.0 00:36:58.707 Removing: /dev/shm/spdk_tgt_trace.pid1566241 00:36:58.707 Removing: /var/run/dpdk/spdk0 00:36:58.707 Removing: /var/run/dpdk/spdk1 00:36:58.707 Removing: /var/run/dpdk/spdk2 00:36:58.707 Removing: /var/run/dpdk/spdk3 00:36:58.707 Removing: /var/run/dpdk/spdk4 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1563884 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1564943 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1566241 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1566894 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1567845 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1567942 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1568971 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1569066 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1569418 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1571125 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1572842 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1573238 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1573450 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1573652 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1573909 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1574161 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1574409 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1574696 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1575427 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1578431 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1578690 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1578891 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1578947 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1579437 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1579460 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1579948 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1579954 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1580218 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1580444 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1580701 00:36:58.707 Removing: 
/var/run/dpdk/spdk_pid1580711 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1581270 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1581496 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1581826 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1585539 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1590026 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1600061 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1600654 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1605035 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1605363 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1609710 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1615737 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1618789 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1628992 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1638061 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1639766 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1640696 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1658006 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1662074 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1707825 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1713226 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1719493 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1725986 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1725988 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1726903 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1727815 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1728729 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1729198 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1729204 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1729440 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1729668 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1729672 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1730589 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1731416 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1732209 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1732886 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1732894 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1733125 
00:36:58.707 Removing: /var/run/dpdk/spdk_pid1734147 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1735127 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1743435 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1772348 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1776858 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1778459 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1780296 00:36:58.707 Removing: /var/run/dpdk/spdk_pid1780415 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1780550 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1780778 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1781282 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1782959 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1783841 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1784195 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1786454 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1786909 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1787610 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1792265 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1797662 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1797663 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1797664 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1801450 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1810004 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1813965 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1819803 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1821107 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1822635 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1823963 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1828673 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1833006 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1836970 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1844935 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1844975 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1849651 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1849878 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1850110 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1850507 00:36:58.967 Removing: 
/var/run/dpdk/spdk_pid1850575 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1855250 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1855784 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1860208 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1862925 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1868354 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1873680 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1882276 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1889760 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1889768 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1908557 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1909034 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1909719 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1910208 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1910945 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1911420 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1911945 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1912584 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1916632 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1916868 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1922928 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1923204 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1928563 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1932916 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1943147 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1943621 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1947898 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1948339 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1952596 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1958239 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1960817 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1970870 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1979648 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1981375 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1982681 00:36:58.967 Removing: /var/run/dpdk/spdk_pid1998835 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2002642 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2005489 
00:36:59.227 Removing: /var/run/dpdk/spdk_pid2013355 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2013470 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2018668 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2020526 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2022492 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2023762 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2025862 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2027304 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2036050 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2036512 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2036971 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2039452 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2039920 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2040390 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2044215 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2044229 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2045741 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2046305 00:36:59.227 Removing: /var/run/dpdk/spdk_pid2046314 00:36:59.227 Clean 00:36:59.227 05:58:47 -- common/autotest_common.sh@1453 -- # return 0 00:36:59.227 05:58:47 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:59.227 05:58:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:59.227 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:36:59.227 05:58:47 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:59.227 05:58:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:59.227 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:36:59.227 05:58:47 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:59.227 05:58:47 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:59.227 05:58:47 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:59.227 05:58:47 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:59.227 05:58:47 
-- spdk/autotest.sh@398 -- # hostname 00:36:59.227 05:58:47 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:59.486 geninfo: WARNING: invalid characters removed from testname! 00:37:21.424 05:59:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:22.802 05:59:10 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:24.708 05:59:12 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:26.617 05:59:14 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:28.523 05:59:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:30.428 05:59:17 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:31.804 05:59:19 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:31.804 05:59:19 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:31.804 05:59:19 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:37:31.804 05:59:19 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:31.804 05:59:19 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:31.804 05:59:19 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:32.064 + [[ -n 
1487114 ]] 00:37:32.064 + sudo kill 1487114 00:37:32.074 [Pipeline] } 00:37:32.092 [Pipeline] // stage 00:37:32.098 [Pipeline] } 00:37:32.113 [Pipeline] // timeout 00:37:32.117 [Pipeline] } 00:37:32.131 [Pipeline] // catchError 00:37:32.136 [Pipeline] } 00:37:32.150 [Pipeline] // wrap 00:37:32.156 [Pipeline] } 00:37:32.168 [Pipeline] // catchError 00:37:32.177 [Pipeline] stage 00:37:32.179 [Pipeline] { (Epilogue) 00:37:32.191 [Pipeline] catchError 00:37:32.193 [Pipeline] { 00:37:32.205 [Pipeline] echo 00:37:32.207 Cleanup processes 00:37:32.212 [Pipeline] sh 00:37:32.499 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:32.499 2056986 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:32.514 [Pipeline] sh 00:37:32.802 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:32.802 ++ grep -v 'sudo pgrep' 00:37:32.802 ++ awk '{print $1}' 00:37:32.802 + sudo kill -9 00:37:32.802 + true 00:37:32.815 [Pipeline] sh 00:37:33.105 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:45.331 [Pipeline] sh 00:37:45.617 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:45.617 Artifacts sizes are good 00:37:45.634 [Pipeline] archiveArtifacts 00:37:45.643 Archiving artifacts 00:37:45.779 [Pipeline] sh 00:37:46.065 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:46.081 [Pipeline] cleanWs 00:37:46.091 [WS-CLEANUP] Deleting project workspace... 00:37:46.091 [WS-CLEANUP] Deferred wipeout is used... 00:37:46.098 [WS-CLEANUP] done 00:37:46.100 [Pipeline] } 00:37:46.118 [Pipeline] // catchError 00:37:46.130 [Pipeline] sh 00:37:46.520 + logger -p user.info -t JENKINS-CI 00:37:46.529 [Pipeline] } 00:37:46.541 [Pipeline] // stage 00:37:46.546 [Pipeline] } 00:37:46.560 [Pipeline] // node 00:37:46.565 [Pipeline] End of Pipeline 00:37:46.596 Finished: SUCCESS